modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-06 18:27:02) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 544 distinct values) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-06 18:26:43) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
ghadeermobasher/Originalbiobert-v1.1-BioRED-CD-128-32-30
|
ghadeermobasher
| 2022-07-13T17:47:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-13T17:05:57Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: Originalbiobert-v1.1-BioRED-CD-128-32-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Originalbiobert-v1.1-BioRED-CD-128-32-30
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Precision: 0.9994
- Recall: 1.0
- F1: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
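For orientation, these settings map onto `transformers.TrainingArguments` roughly as in the sketch below; this is an illustration rather than the original training script, and `output_dir` (and anything not listed above) is an assumption.
```python
from transformers import TrainingArguments

# Values mirror the hyperparameter list above; output_dir is an assumed placeholder.
training_args = TrainingArguments(
    output_dir="Originalbiobert-v1.1-BioRED-CD-128-32-30",
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=1,
    lr_scheduler_type="linear",
    num_train_epochs=30.0,
)
```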
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.10.3
|
ticoAg/distilbert-base-uncased-finetuned-emotion
|
ticoAg
| 2022-07-13T17:18:10Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-13T17:00:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261470780516246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2148
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
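Pending more details from the author, here is a minimal, hedged usage sketch with the standard `text-classification` pipeline (the label set comes from the emotion dataset):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint directly from the Hub.
classifier = pipeline(
    "text-classification",
    model="ticoAg/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled with how this turned out!"))
```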
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8297 | 1.0 | 250 | 0.3235 | 0.9015 | 0.8977 |
| 0.2504 | 2.0 | 500 | 0.2148 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.7.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bothrajat/testpyramidsrnd
|
bothrajat
| 2022-07-13T17:05:25Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-13T15:57:34Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: bothrajat/testpyramidsrnd
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
public-data/AnimeGANv3-portrait-sketch
|
public-data
| 2022-07-13T17:02:13Z | 0 | 2 | null |
[
"onnx",
"region:us"
] | null | 2022-07-13T16:59:59Z |
# AnimeGANv3 portrait sketch
- https://github.com/TachibanaYoshino/AnimeGANv3
- https://docs.google.com/uc?export=download&id=1F6BSJY3HibzQ08kE_al6pkXd1evxS40s
|
gemasphi/laprador_pt
|
gemasphi
| 2022-07-13T15:37:55Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-07-13T15:37:48Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# gemasphi/laprador_pt
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gemasphi/laprador_pt')
embeddings = model.encode(sentences)
print(embeddings)
```
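Since the model is intended for sentence similarity, a natural follow-up is to compare the embeddings, for example with cosine similarity (depending on the installed sentence-transformers version, the helper may be named `util.cos_sim` or `util.pytorch_cos_sim`):
```python
from sentence_transformers import util

# Cosine similarity between the two example sentences encoded above.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(similarity)
```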
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gemasphi/laprador_pt')
model = AutoModel.from_pretrained('gemasphi/laprador_pt')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/laprador_pt)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
IlyaGusev/xlm_roberta_large_headline_cause_simple
|
IlyaGusev
| 2022-07-13T15:36:36Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-large",
"ru",
"en",
"dataset:IlyaGusev/headline_cause",
"arxiv:2108.12626",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- ru
- en
tags:
- xlm-roberta-large
datasets:
- IlyaGusev/headline_cause
license: apache-2.0
widget:
- text: "Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку"
---
# XLM-RoBERTa HeadlineCause Simple
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Simple task with 3 possible labels: A causes B, B causes A, no causal relation. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, separate the headlines with the ```</s>``` token.
For example:
```
Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку
```
## Intended uses & limitations
#### How to use
```python
from tqdm.notebook import tqdm
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
def get_batch(data, batch_size):
    start_index = 0
    while start_index < len(data):
        end_index = start_index + batch_size
        batch = data[start_index:end_index]
        yield batch
        start_index = end_index

def pipe_predict(data, pipe, batch_size=64):
    raw_preds = []
    for batch in tqdm(get_batch(data, batch_size)):
        raw_preds += pipe(batch)
    return raw_preds
MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_simple"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True)
texts = [
(
"Judge issues order to allow indoor worship in NC churches",
"Some local churches resume indoor services after judge lifted NC governor’s restriction"
),
(
"Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump",
"Oklahoma spent $2 million on malaria drug touted by Trump"
),
(
"Песков опроверг свой перевод на удаленку",
"Дмитрий Песков перешел на удаленку"
)
]
pipe_predict(texts, pipe)
```
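The pipeline is created with `return_all_scores=True`, so `pipe_predict` returns one list of `{label, score}` dicts per headline pair. A hedged example of picking the top label (the label names depend on the model config, so check `model.config.id2label`):
```python
preds = pipe_predict(texts, pipe)
for (headline_a, headline_b), scores in zip(texts, preds):
    best = max(scores, key=lambda s: s["score"])
    print(f"{headline_a} | {headline_b} -> {best['label']} ({best['score']:.3f})")
```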
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause)
* GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause)
## Training procedure
* Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA)
* Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py)
## Eval results
Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf).
### BibTeX entry and citation info
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
IlyaGusev/xlm_roberta_large_headline_cause_full
|
IlyaGusev
| 2022-07-13T15:35:52Z | 154 | 3 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-large",
"ru",
"en",
"dataset:IlyaGusev/headline_cause",
"arxiv:2108.12626",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- ru
- en
tags:
- xlm-roberta-large
datasets:
- IlyaGusev/headline_cause
license: apache-2.0
widget:
- text: "Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку"
---
# XLM-RoBERTa HeadlineCause Full
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Full task with 7 possible labels: titles are almost the same, A causes B, B causes A, A refutes B, B refutes A, A linked with B in another way, A is not linked to B. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, separate the headlines with the ```</s>``` token.
For example:
```
Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку
```
## Intended uses & limitations
#### How to use
```python
from tqdm.notebook import tqdm
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
def get_batch(data, batch_size):
    start_index = 0
    while start_index < len(data):
        end_index = start_index + batch_size
        batch = data[start_index:end_index]
        yield batch
        start_index = end_index

def pipe_predict(data, pipe, batch_size=64):
    raw_preds = []
    for batch in tqdm(get_batch(data, batch_size)):
        raw_preds += pipe(batch)
    return raw_preds
MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_full"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True)
texts = [
(
"Judge issues order to allow indoor worship in NC churches",
"Some local churches resume indoor services after judge lifted NC governor’s restriction"
),
(
"Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump",
"Oklahoma spent $2 million on malaria drug touted by Trump"
),
(
"Песков опроверг свой перевод на удаленку",
"Дмитрий Песков перешел на удаленку"
)
]
pipe_predict(texts, pipe)
```
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause)
* GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause)
## Training procedure
* Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA)
* Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py)
## Eval results
Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf).
### BibTeX entry and citation info
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
IlyaGusev/sber_rut5_filler
|
IlyaGusev
| 2022-07-13T15:34:32Z | 31 | 3 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- ru
license: apache-2.0
widget:
- text: Эта блядь меня заебала</s> Эта <extra_id_0> меня <extra_id_1>
---
|
IlyaGusev/rubertconv_toxic_clf
|
IlyaGusev
| 2022-07-13T15:34:11Z | 14,240 | 13 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- ru
tags:
- text-classification
license: apache-2.0
---
# RuBERTConv Toxic Classifier
## Model description
Based on the [rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model.
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1veKO9hke7myxKigZtZho_F-UM2fD9kp8)
```python
from transformers import pipeline
model_name = "IlyaGusev/rubertconv_toxic_clf"
pipe = pipeline("text-classification", model=model_name, tokenizer=model_name, framework="pt")
text = "Ты придурок из интернета"
pipe([text])
```
## Training data
Datasets:
- [2ch](https://www.kaggle.com/blackmoon/russian-language-toxic-comments)
- [Odnoklassniki](https://www.kaggle.com/alexandersemiletov/toxic-russian-comments)
- [Toloka Persona Chat Rus](https://toloka.ai/ru/datasets)
- [Koziev's Conversations](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data) with [toxic words vocabulary](https://www.dropbox.com/s/ou6lx03b10yhrfl/bad_vocab.txt.tar.gz)
Augmentations:
- ё -> е
- Remove or add "?" or "!"
- Fix CAPS
- Concatenate toxic and non-toxic texts
- Concatenate two non-toxic texts
- Add toxic words from vocabulary
- Add typos
- Mask toxic words with "*", "@", "$"
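A minimal sketch of what a few of the augmentations listed above could look like in code; this is an illustration written for this card, not the original training pipeline, and the toxic-word vocabulary is assumed to be loaded from the linked archive.
```python
import random

def yo_to_e(text: str) -> str:
    # ё -> е (and Ё -> Е)
    return text.replace("ё", "е").replace("Ё", "Е")

def toggle_punct(text: str) -> str:
    # Randomly remove or add a trailing "?" or "!"
    if text and text[-1] in "?!":
        return text[:-1]
    return text + random.choice(["?", "!"])

def mask_toxic_words(text: str, toxic_vocab: set) -> str:
    # Mask toxic words with "*", "@" or "$"
    masked = []
    for word in text.split():
        if word.lower() in toxic_vocab:
            word = "".join(random.choice("*@$") for _ in word)
        masked.append(word)
    return " ".join(masked)

def concat_pair(text_a: str, text_b: str) -> str:
    # Concatenate two texts (toxic + non-toxic, or two non-toxic texts)
    return f"{text_a} {text_b}"
```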
## Training procedure
TBA
|
allermat/distilbert-base-uncased-finetuned-emotion
|
allermat
| 2022-07-13T15:20:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-09T16:16:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9233300539962602
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2244
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8412 | 1.0 | 250 | 0.3186 | 0.904 | 0.9022 |
| 0.2501 | 2.0 | 500 | 0.2244 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
carlosaguayo/distilbert-base-uncased-finetuned-emotion
|
carlosaguayo
| 2022-07-13T14:50:13Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9299984897610097
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1689
- Accuracy: 0.9295
- F1: 0.9300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2853 | 1.0 | 250 | 0.1975 | 0.9235 | 0.9233 |
| 0.1568 | 2.0 | 500 | 0.1689 | 0.9295 | 0.9300 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
jpalojarvi/finetuning-sentiment-model-3000-samples
|
jpalojarvi
| 2022-07-13T14:48:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-13T14:14:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8590604026845637
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
- Accuracy: 0.86
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_5
|
nawta
| 2022-07-13T14:43:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-13T14:30:32Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_5
This model is a fine-tuned version of a local checkpoint, `/root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin` (the auto-generated Hub link does not resolve), on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
fxmarty/20220713-h14m38s16_example_conll2003
|
fxmarty
| 2022-07-13T14:38:21Z | 0 | 0 | null |
[
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"region:us"
] |
token-classification
| 2022-07-13T14:38:16Z |
---
pipeline_tag: token-classification
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
tags:
- distilbert
---
**task**: `token-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}`
**Number of evaluation samples:** `All dataset`
Fixed parameters:
* **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english`
* **dataset**:
* **path**: `conll2003`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'tokens'}`
* **ref_keys**: `['ner_tags']`
* **calibration_split**: `train`
* **quantization_approach**: `static`
* **operators_to_quantize**: `['Add', 'MatMul']`
* **per_channel**: `False`
* **calibration**:
* **method**: `minmax`
* **num_calibration_samples**: `100`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']`
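The parameters above describe a static ONNX Runtime quantization run (minmax calibration, `Add`/`MatMul` operators, per-channel off, optional node exclusions). The benchmark itself was produced with a managed harness, but a rough sketch of an equivalent run with onnxruntime's Python quantization API could look like the following; model paths, input names and the calibration data are assumptions.
```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, CalibrationMethod, quantize_static

class RandomTokenReader(CalibrationDataReader):
    """Feeds a fixed number of calibration batches (random token ids here; the real run used the train split)."""
    def __init__(self, num_samples=100, seq_len=64, vocab_size=30522):
        self._batches = iter(
            {
                "input_ids": np.random.randint(0, vocab_size, (1, seq_len), dtype=np.int64),
                "attention_mask": np.ones((1, seq_len), dtype=np.int64),
            }
            for _ in range(num_samples)
        )
    def get_next(self):
        return next(self._batches, None)

quantize_static(
    model_input="distilbert-conll03.onnx",         # assumed path of the exported model
    model_output="distilbert-conll03-quant.onnx",
    calibration_data_reader=RandomTokenReader(num_samples=100),
    calibrate_method=CalibrationMethod.MinMax,
    op_types_to_quantize=["Add", "MatMul"],
    per_channel=False,
    nodes_to_exclude=[],  # or the node names matching layernorm/gelu/residual/gather/softmax
)
```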
# Evaluation
## Non-time metrics
| node_exclusion | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) |
| :------------------------------------------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: |
| `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 0.936 | 0.904 | \| | 0.944 | 0.921 | \| | 0.940 | 0.912 | \| | 0.988 | 0.984 |
| `[]` | \| | 0.936 | 0.065 | \| | 0.944 | 0.243 | \| | 0.940 | 0.103 | \| | 0.988 | 0.357 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 4, input length = 64.
| node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 114.51 | 53.59 | \| | 8.73 | 18.67 |
| `[]` | \| | 90.67 | 59.55 | \| | 11.07 | 16.87 |
|
bothrajat/q-FrozenLake-v1-4x4-Slippery
|
bothrajat
| 2022-07-13T14:02:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-13T10:06:49Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.04 +/- 0.19
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="bothrajat/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
johntang/finetuning-sentiment-model-3000-samples
|
johntang
| 2022-07-13T14:02:11Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-17T18:54:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8786885245901639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3426
- Accuracy: 0.8767
- F1: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Chris1/q-Taxi-v3
|
Chris1
| 2022-07-13T13:53:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-13T13:53:02Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.46 +/- 2.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Chris1/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12
|
yuekai
| 2022-07-13T13:51:59Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-07-12T01:54:35Z |
---
license: apache-2.0
---
### How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12
cd icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12
git lfs pull
```
|
yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12
|
yuekai
| 2022-07-13T13:49:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-07-13T02:19:09Z |
---
license: apache-2.0
---
### How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12
cd icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12
git lfs pull
```
|
ArneD/distilbert-base-uncased-finetuned-emotion
|
ArneD
| 2022-07-13T13:43:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-21T06:42:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9218894133133121
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.922
- F1: 0.9219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8205 | 1.0 | 250 | 0.3028 | 0.909 | 0.9061 |
| 0.245 | 2.0 | 500 | 0.2147 | 0.922 | 0.9219 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
fxmarty/20220713-h13m33s02_example_conll2003
|
fxmarty
| 2022-07-13T13:33:09Z | 0 | 0 | null |
[
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"region:us"
] |
token-classification
| 2022-07-13T13:33:02Z |
---
pipeline_tag: token-classification
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
tags:
- distilbert
---
**task**: `token-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}`
**Number of evaluation samples:** `All dataset`
Fixed parameters:
* **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english`
* **dataset**:
* **path**: `conll2003`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'tokens'}`
* **ref_keys**: `['ner_tags']`
* **calibration_split**: `train`
* **quantization_approach**: `static`
* **operators_to_quantize**: `['Add', 'MatMul']`
* **per_channel**: `False`
* **calibration**:
* **method**: `minmax`
* **num_calibration_samples**: `100`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']`
# Evaluation
## Non-time metrics
| node_exclusion | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) |
| :------------------------------------------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: |
| `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 0.936 | 0.904 | \| | 0.944 | 0.921 | \| | 0.940 | 0.912 | \| | 0.988 | 0.984 |
| `[]` | \| | 0.936 | 0.065 | \| | 0.944 | 0.243 | \| | 0.940 | 0.103 | \| | 0.988 | 0.357 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 4, input length = 64.
| node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 103.46 | 53.77 | \| | 9.67 | 18.60 |
| `[]` | \| | 90.62 | 65.86 | \| | 11.07 | 15.20 |
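For reference, numbers like these can be reproduced with a simple timing loop around an ONNX Runtime session; the sketch below is illustrative (paths and input names are assumptions), not the harness that generated this card.
```python
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("distilbert-conll03-quant.onnx")  # assumed path
feed = {
    "input_ids": np.random.randint(0, 30522, (4, 64), dtype=np.int64),  # batch size 4, length 64
    "attention_mask": np.ones((4, 64), dtype=np.int64),
}

latencies_ms = []
deadline = time.perf_counter() + 15  # run for 15 seconds per config, as above
while time.perf_counter() < deadline:
    start = time.perf_counter()
    session.run(None, feed)
    latencies_ms.append((time.perf_counter() - start) * 1000)

print(f"latency_mean: {np.mean(latencies_ms):.2f} ms, throughput: {len(latencies_ms) / 15:.2f} /s")
```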
|
hossay/distilbert-base-uncased-finetuned-ner
|
hossay
| 2022-07-13T13:32:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-10T00:51:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9263064854712186
- name: Recall
type: recall
value: 0.9379125181787672
- name: F1
type: f1
value: 0.9320733740967203
- name: Accuracy
type: accuracy
value: 0.9838117781625813
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9263
- Recall: 0.9379
- F1: 0.9321
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
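Pending more details from the author, a minimal, hedged usage sketch with the standard `token-classification` pipeline (the entity labels should follow the conll2003 scheme):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hossay/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```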
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2418 | 1.0 | 878 | 0.0709 | 0.9168 | 0.9242 | 0.9204 | 0.9806 |
| 0.0514 | 2.0 | 1756 | 0.0622 | 0.9175 | 0.9338 | 0.9255 | 0.9826 |
| 0.0306 | 3.0 | 2634 | 0.0614 | 0.9263 | 0.9379 | 0.9321 | 0.9838 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
andreaschandra/distilbert-base-uncased-finetuned-emotion
|
andreaschandra
| 2022-07-13T13:16:46Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-02T07:02:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240890586429673
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8218 | 1.0 | 250 | 0.3165 | 0.9025 | 0.9001 |
| 0.2494 | 2.0 | 500 | 0.2186 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
frahman/distilbert-base-uncased-finetuned-emotion
|
frahman
| 2022-07-13T12:58:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.9206660865871332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.9205
- F1: 0.9207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8234 | 1.0 | 250 | 0.3185 | 0.9025 | 0.8992 |
| 0.2466 | 2.0 | 500 | 0.2202 | 0.9205 | 0.9207 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jordyvl/udpos28-sm-all-POS
|
jordyvl
| 2022-07-13T12:23:52Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:udpos28",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-13T12:03:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- udpos28
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: udpos28-sm-all-POS
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: udpos28
type: udpos28
args: en
metrics:
- name: Precision
type: precision
value: 0.9586517032792105
- name: Recall
type: recall
value: 0.9588997472284696
- name: F1
type: f1
value: 0.9587757092110369
- name: Accuracy
type: accuracy
value: 0.964820639556654
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# udpos28-sm-all-POS
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the udpos28 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1479
- Precision: 0.9587
- Recall: 0.9589
- F1: 0.9588
- Accuracy: 0.9648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1261 | 1.0 | 4978 | 0.1358 | 0.9513 | 0.9510 | 0.9512 | 0.9581 |
| 0.0788 | 2.0 | 9956 | 0.1326 | 0.9578 | 0.9578 | 0.9578 | 0.9642 |
| 0.0424 | 3.0 | 14934 | 0.1479 | 0.9587 | 0.9589 | 0.9588 | 0.9648 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Sreevishnu/funnel-transformer-small-imdb
|
Sreevishnu
| 2022-07-13T12:17:17Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"funnel",
"text-classification",
"sentiment-analysis",
"en",
"dataset:imdb",
"arxiv:2006.03236",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-15T18:48:18Z |
---
license: apache-2.0
language: en
widget:
- text: "In the garden of wonderment that is the body of work by the animation master Hayao Miyazaki, his 2001 gem 'Spirited Away' is at once one of his most accessible films to a Western audience and the one most distinctly rooted in Japanese culture and lore. The tale of Chihiro, a 10 year old girl who resents being moved away from all her friends, only to find herself working in a bathhouse for the gods, doesn't just use its home country's fraught relationship with deities as a backdrop. Never remotely didactic, the film is ultimately a self-fulfilment drama that touches on religious, ethical, ecological and psychological issues.
It's also a fine children's film, the kind that elicits a deepening bond across repeat viewings and the passage of time, mostly because Miyazaki refuses to talk down to younger viewers. That's been a constant in all of his filmography, but it's particularly conspicuous here because the stakes for its young protagonist are bigger than in most of his previous features aimed at younger viewers. It involves conquering fears and finding oneself in situations where safety is not a given.
There are so many moving parts in Spirited Away, from both a thematic and technical point of view, that pinpointing what makes Spirited Away stand out from an already outstanding body of work becomes as challenging as a meeting with Yubaba. But I think it comes down to an ability to deal with heady, complex subject matter from a young girl's perspective without diluting or lessening its resonance. Miyazaki has made a loopy, demanding work of art that asks your inner child to come out and play. There are few high-wire acts in all of movie-dom as satisfying as that."
datasets:
- imdb
tags:
- sentiment-analysis
---
# Funnel Transformer small (B4-4-4 with decoder) fine-tuned on IMDB for Sentiment Analysis
These are the model weights for the Funnel Transformer small model fine-tuned on the IMDB dataset for performing Sentiment Analysis with `max_position_embeddings=1024`.
The original English-language model weights are from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small), which uses a pretraining objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English.
## Fine-tuning Results
| | Accuracy | Precision | Recall | F1 |
|-------------------------------|----------|-----------|----------|----------|
| funnel-transformer-small-imdb | 0.956530 | 0.952286 | 0.961075 | 0.956661 |
## Model description (from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small))
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
# How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained(
"Sreevishnu/funnel-transformer-small-imdb",
use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(
"Sreevishnu/funnel-transformer-small-imdb",
num_labels=2,
max_position_embeddings=1024)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
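A short, hedged follow-up for turning the logits into a sentiment prediction; the label names are not documented here, so check `model.config.id2label` for the actual mapping:
```python
import torch

probs = torch.softmax(output.logits, dim=-1)
predicted = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted], f"(confidence {probs.max().item():.3f})")
```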
# Example App
https://lazy-film-reviews-7gif2bz4sa-ew.a.run.app/
Project repo: https://github.com/akshaydevml/lazy-film-reviews
|
facebook/deit-tiny-patch16-224
|
facebook
| 2022-07-13T11:53:31Z | 35,980 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"vit",
"image-classification",
"dataset:imagenet",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# Data-efficient Image Transformer (tiny-sized model)
Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is actually a more efficiently trained Vision Transformer (ViT).
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before the sequence is fed to the Transformer encoder layers.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-tiny-patch16-224')
model = ViTForImageClassification.from_pretrained('facebook/deit-tiny-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.
## Training data
The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
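As a sketch, the inference-time preprocessing described above corresponds to a standard torchvision transform stack (the ImageNet mean/std values are the usual defaults and are assumed here):
```python
from torchvision import transforms

eval_transform = transforms.Compose([
    transforms.Resize(256),        # resize to 256, per the description above
    transforms.CenterCrop(224),    # center-crop to 224x224
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet mean
                         std=[0.229, 0.224, 0.225]),  # ImageNet std
])
```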
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| **DeiT-tiny** | **72.2** | **91.1** | **5M** | **https://huggingface.co/facebook/deit-tiny-patch16-224** |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
facebook/deit-small-distilled-patch16-224
|
facebook
| 2022-07-13T11:41:21Z | 4,247 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deit",
"image-classification",
"vision",
"dataset:imagenet",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# Distilled Data-efficient Image Transformer (small-sized model)
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-small-distilled-patch16-224')
model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-small-distilled-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/FLAX are coming soon.
## Training data
This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| **DeiT-small distilled** | **81.2** | **95.4** | **22M** | **https://huggingface.co/facebook/deit-small-distilled-patch16-224** |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
facebook/deit-base-patch16-384
|
facebook
| 2022-07-13T11:41:03Z | 349 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"vit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet-1k
---
# Data-efficient Image Transformer (base-sized model)
Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 384x384. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is actually a more efficiently trained Vision Transformer (ViT).
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained at resolution 224 and fine-tuned at resolution 384 on a large collection of images in a supervised fashion, namely ImageNet-1k.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
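To illustrate the linear-probe idea, the sketch below takes the final hidden state of the [CLS] token from ViTModel and feeds it to a fresh linear layer; the 10-class head is an assumption for the example, not part of this checkpoint:
```python
import torch
import requests
from PIL import Image
from transformers import AutoFeatureExtractor, ViTModel

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-384')
encoder = ViTModel.from_pretrained('facebook/deit-base-patch16-384')
classifier = torch.nn.Linear(encoder.config.hidden_size, 10)  # hypothetical 10-class task

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    cls_embedding = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] token representation
logits = classifier(cls_embedding)  # this head is what you would train on your labeled data
```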
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-384')
model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/FLAX are coming soon.
## Training data
The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| **DeiT-base 384** | **82.9** | **96.2** | **87M** | **https://huggingface.co/facebook/deit-base-patch16-384** |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
facebook/deit-base-patch16-224
|
facebook
| 2022-07-13T11:40:44Z | 144,060 | 13 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"vit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet-1k
---
# Data-efficient Image Transformer (base-sized model)
Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is actually a more efficiently trained Vision Transformer (ViT).
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
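To make the sequence length concrete: a 224x224 image cut into 16x16 patches gives (224/16)^2 = 196 patch tokens, plus the [CLS] token. A short shape check (a sketch using a random tensor in place of a real image):
```python
import torch
from transformers import ViTModel

model = ViTModel.from_pretrained('facebook/deit-base-patch16-224')
num_patches = (model.config.image_size // model.config.patch_size) ** 2  # (224 // 16) ** 2 = 196

pixel_values = torch.randn(1, 3, 224, 224)  # dummy image batch
with torch.no_grad():
    hidden_states = model(pixel_values=pixel_values).last_hidden_state
print(num_patches, hidden_states.shape)  # 196, torch.Size([1, 197, 768]) -> 196 patches + [CLS]
```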
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/FLAX are coming soon.
## Training data
The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| **DeiT-base** | **81.8** | **95.6** | **86M** | **https://huggingface.co/facebook/deit-base-patch16-224** |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
facebook/deit-base-distilled-patch16-224
|
facebook
| 2022-07-13T11:39:38Z | 16,934 | 23 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deit",
"image-classification",
"vision",
"dataset:imagenet",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# Distilled Data-efficient Image Transformer (base-sized model)
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-distilled-patch16-224')
model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-base-distilled-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
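If you want to look at the two classification heads separately, the output of DeiTForImageClassificationWithTeacher also exposes `cls_logits` and `distillation_logits`; per the current transformers documentation, `logits` is their average at inference time. A short continuation of the snippet above:
```python
import torch

# `outputs` and `model` come from the example above
cls_logits = outputs.cls_logits                    # head on the class token
distillation_logits = outputs.distillation_logits  # head on the distillation token

# `outputs.logits` should be the average of the two heads
print(torch.allclose(outputs.logits, (cls_logits + distillation_logits) / 2))

# top-5 ImageNet classes from the combined prediction
top5 = outputs.logits.softmax(-1).topk(5)
for prob, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{model.config.id2label[idx.item()]}: {prob.item():.3f}")
```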
Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/FLAX are coming soon.
## Training data
This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| **DeiT-base distilled** | **83.4** | **96.5** | **87M** | **https://huggingface.co/facebook/deit-base-distilled-patch16-224** |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
matjesg/deepflash2_demo
|
matjesg
| 2022-07-13T10:54:35Z | 0 | 2 | null |
[
"onnx",
"image-segmentation",
"semantic-segmentation",
"deepflash2",
"arxiv:2111.06693",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2022-05-31T09:43:39Z |
---
tags:
- image-segmentation
- semantic-segmentation
- deepflash2
license: apache-2.0
datasets:
- "cFOS in HC"
- "YFP in CTX"
---
# Demo models for

**Try in [Hugging Face Spaces](https://huggingface.co/spaces/matjesg/deepflash2)** 🤗🤗🤗
- **Task**: Image Segmentation / Semantic Segmentation
- **Paper**: The preprint of our paper is available on [arXiv](https://arxiv.org/pdf/2111.06693.pdf)
- **Data**: The cFOS in HC dataset ([Article](https://doi.org/10.7554/eLife.59780), [Data](https://doi.org/10.5061/dryad.4b8gtht9d)) describes the indirect immunofluorescent labeling of the transcription factor cFOS in different subregions of the hippocampus after behavioral testing of the mice.
- **Library**: See [github](https://github.com/matjesg/deepflash2/)
|
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_2
|
nawta
| 2022-07-13T10:11:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-13T09:25:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_2
This model is a fine-tuned version of a wav2vec2 checkpoint pre-trained with ESC-50 (local checkpoint `/root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin`, not available on the Hub) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6235
- Cer: 0.8973
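No usage example is provided yet. Since the model is evaluated with CER, it is presumably a CTC checkpoint; a minimal inference sketch could look like the following, assuming the repository includes a matching `Wav2Vec2Processor` and that the model expects 16 kHz mono audio (both are assumptions):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo = "nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_2"
processor = Wav2Vec2Processor.from_pretrained(repo)  # assumes a processor is shipped with the repo
model = Wav2Vec2ForCTC.from_pretrained(repo)

speech, _ = librosa.load("sample.wav", sr=16_000)  # hypothetical audio file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```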
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0097 | 23.81 | 500 | 2.6235 | 0.8973 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
casasdorjunior/t5-small-finetuned-cc-news-es-titles
|
casasdorjunior
| 2022-07-13T08:52:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cc-news-es-titles",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-13T07:38:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cc-news-es-titles
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cc-news-es-titles
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cc-news-es-titles
type: cc-news-es-titles
args: default
metrics:
- name: Rouge1
type: rouge
value: 16.701
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cc-news-es-titles
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cc-news-es-titles dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6383
- Rouge1: 16.701
- Rouge2: 4.1265
- Rougel: 14.8175
- Rougelsum: 14.8193
- Gen Len: 18.9159
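A minimal inference sketch (assumption: the model maps Spanish article text to a headline, in line with its text2text fine-tuning; the example article below is invented):
```python
from transformers import pipeline

title_generator = pipeline(
    "text2text-generation",
    model="casasdorjunior/t5-small-finetuned-cc-news-es-titles",
)

article = (
    "El ayuntamiento anunció hoy un nuevo plan de movilidad que ampliará "
    "los carriles bici y reducirá el tráfico en el centro de la ciudad."
)
print(title_generator(article, max_length=32)[0]["generated_text"])
```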
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 2.8439 | 1.0 | 23133 | 2.6383 | 16.701 | 4.1265 | 14.8175 | 14.8193 | 18.9159 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
loz/Test
|
loz
| 2022-07-13T08:11:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-13T08:08:54Z |
me on a bike
going into the sunset
at night
with my dog running along side me
|
dsivakumar/text2sql
|
dsivakumar
| 2022-07-13T07:27:17Z | 28 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:wikisql",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-10T07:43:23Z |
---
language:
- en
datasets:
- wikisql
widget:
- text: "English to SQL: Show me the average age of of wines in Italy by provinces"
- text: "English to SQL: What is the current series where the new series began in June 2011?"
---
```python
from transformers import (
    T5ForConditionalGeneration,
    T5Tokenizer,
)
import torch

# load model
model = T5ForConditionalGeneration.from_pretrained('dsivakumar/text2sql')
tokenizer = T5Tokenizer.from_pretrained('dsivakumar/text2sql')

# predict function
def get_sql(query, tokenizer, model):
    source_text = "English to SQL: " + query
    source_text = ' '.join(source_text.split())
    source = tokenizer.batch_encode_plus([source_text], max_length=128, truncation=True,
                                         padding="max_length", return_tensors='pt')
    source_ids = source['input_ids']
    source_mask = source['attention_mask']
    generated_ids = model.generate(
        input_ids=source_ids.to(dtype=torch.long),
        attention_mask=source_mask.to(dtype=torch.long),
        max_length=150,
        num_beams=2,
        repetition_penalty=2.5,
        length_penalty=1.0,
        early_stopping=True
    )
    preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
    return preds

# test
query = "Show me the average age of wines in Italy by provinces"
sql = get_sql(query, tokenizer, model)
print(sql)

# alternative helper adapted from https://huggingface.co/mrm8488/t5-small-finetuned-wikiSQL
# (renamed so it does not shadow get_sql above)
def get_sql_wikisql(query):
    input_text = "translate English to SQL: %s </s>" % query
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'])
    return tokenizer.decode(output[0], skip_special_tokens=True)

query = "How many models were finetuned using BERT as base model?"
print(get_sql_wikisql(query))
```
|
huggingartists/queen
|
huggingartists
| 2022-07-13T06:52:09Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/queen",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/queen
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/97bcb5755cb9780d76b37726a0ce4bef.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Queen</div>
<a href="https://genius.com/artists/queen">
<div style="text-align: center; font-size: 14px;">@queen</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Queen.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/queen).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/queen")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1jdprwq2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Queen's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2lvkoamo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2lvkoamo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/queen')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/queen")
model = AutoModelWithLMHead.from_pretrained("huggingartists/queen")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
abx/bert-finetuned-ner
|
abx
| 2022-07-13T06:15:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-13T06:04:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9341713529606351
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9422756089422756
- name: Accuracy
type: accuracy
value: 0.9861070230176017
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9342
- Recall: 0.9505
- F1: 0.9423
- Accuracy: 0.9861
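A quick inference sketch with the token-classification pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="abx/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Hugging Face is based in New York City and Paris."))
```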
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0865 | 1.0 | 1756 | 0.0667 | 0.9166 | 0.9379 | 0.9271 | 0.9829 |
| 0.0397 | 2.0 | 3512 | 0.0560 | 0.9337 | 0.9522 | 0.9428 | 0.9867 |
| 0.0194 | 3.0 | 5268 | 0.0623 | 0.9342 | 0.9505 | 0.9423 | 0.9861 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
NimaBoscarino/STPushToHub-test2
|
NimaBoscarino
| 2022-07-13T05:57:37Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-07-13T05:49:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# NimaBoscarino/STPushToHub-test2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('NimaBoscarino/STPushToHub-test2')
embeddings = model.encode(sentences)
print(embeddings)
```
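For semantic search or clustering, the embeddings can be compared directly, e.g. with the cosine-similarity helper that ships with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('NimaBoscarino/STPushToHub-test2')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"],
                          convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity between the two sentences
```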
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('NimaBoscarino/STPushToHub-test2')
model = AutoModel.from_pretrained('NimaBoscarino/STPushToHub-test2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=NimaBoscarino/STPushToHub-test2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/kitsune__spirit
|
huggingtweets
| 2022-07-13T02:51:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/kitsune__spirit/1657680673292/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1523268231833739266/foV-CaZh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">KitsuneSpirit Mei 💝🦊「 YOKOMESHI 」</div>
<div style="text-align: center; font-size: 14px;">@kitsune__spirit</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from KitsuneSpirit Mei 💝🦊「 YOKOMESHI 」.
| Data | KitsuneSpirit Mei 💝🦊「 YOKOMESHI 」 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 67 |
| Short tweets | 820 |
| Tweets kept | 2361 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3uiy3sjw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kitsune__spirit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hdne87l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hdne87l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kitsune__spirit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hugginglearners/multi-object-classification
|
hugginglearners
| 2022-07-13T00:14:55Z | 0 | 2 |
fastai
|
[
"fastai",
"image-classification",
"region:us"
] |
image-classification
| 2022-07-04T04:34:10Z |
---
tags:
- fastai
- image-classification
---
## Model description
This repo contains the trained model for Multi-object classification
Full credits go to [Nhu Hoang](https://www.linkedin.com/in/nhu-hoang/)
Motivation: Classifying multiple objects in an image is challenging without an object detection algorithm. This model was trained with a resnet34 backbone and achieves good accuracy.
## Training and evaluation data
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 3e-3 |
| training_precision | float16 |
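To try the model, it can be loaded from the Hub with the fastai integration in `huggingface_hub` (a sketch; the image path is hypothetical and the label set depends on how the learner was exported):
```python
from huggingface_hub import from_pretrained_fastai

# downloads and rebuilds the exported fastai Learner from this repo
learner = from_pretrained_fastai("hugginglearners/multi-object-classification")
prediction = learner.predict("example.jpg")  # hypothetical image file
print(prediction)
```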
|
andrewzhang505/quad-swarm-rl-1
|
andrewzhang505
| 2022-07-13T00:02:06Z | 5 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2022-07-12T21:09:52Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
---
An **APPO** model trained on the **quadrotor_multi** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
AntiSquid/Reinforce-model-666
|
AntiSquid
| 2022-07-12T21:52:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-12T21:51:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model-666
results:
- metrics:
- type: mean_reward
value: 117.10 +/- 4.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Shaier/medqa_fine_tuned_generic_bert
|
Shaier
| 2022-07-12T20:33:17Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-07-12T19:49:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: medqa_fine_tuned_generic_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medqa_fine_tuned_generic_bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4239
- Accuracy: 0.2869
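Because this is a multiple-choice head, each question has to be paired with every candidate answer and the pairs stacked along a choice dimension. A minimal sketch (the question and options are invented, not taken from MedQA):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "Shaier/medqa_fine_tuned_generic_bert"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "Which vitamin deficiency causes scurvy?"  # hypothetical example
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# encode (question, option) pairs, then add the choice dimension: [1, num_choices, seq_len]
encoding = tokenizer([question] * len(options), options, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(-1).item()])
```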
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.3851 | 0.2594 |
| 1.3896 | 2.0 | 636 | 1.3805 | 0.2807 |
| 1.3896 | 3.0 | 954 | 1.3852 | 0.2948 |
| 1.3629 | 4.0 | 1272 | 1.3996 | 0.2980 |
| 1.3068 | 5.0 | 1590 | 1.4239 | 0.2869 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
jakka/ppo-LunarLander-v2
|
jakka
| 2022-07-12T20:23:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-12T20:22:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 152.92 +/- 80.15
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
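A typical loading pattern with `huggingface_sb3` might look like this (a sketch; the checkpoint filename inside the repo is an assumption):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# download the checkpoint from this repo (filename assumed)
checkpoint = load_from_hub(repo_id="jakka/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```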
|
didi27/bloom-edu
|
didi27
| 2022-07-12T17:57:21Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-07-12T17:57:16Z |
---
license: bigscience-bloom-rail-1.0
---
|
huggingtweets/masonhaggerty
|
huggingtweets
| 2022-07-12T17:17:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-12T16:48:40Z |
---
language: en
thumbnail: http://www.huggingtweets.com/masonhaggerty/1657646221015/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410026132121047041/LiYev7vQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mason Haggerty</div>
<div style="text-align: center; font-size: 14px;">@masonhaggerty</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mason Haggerty.
| Data | Mason Haggerty |
| --- | --- |
| Tweets downloaded | 785 |
| Retweets | 71 |
| Short tweets | 82 |
| Tweets kept | 632 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jpav9nmg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @masonhaggerty's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bs6k2tzz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bs6k2tzz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/masonhaggerty')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Li-Tang/rare-puppers
|
Li-Tang
| 2022-07-12T16:57:55Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-12T16:57:42Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9701492786407471
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
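As a usage illustration (not part of the autogenerated card), the classifier can be loaded with the standard image-classification pipeline; the image path below is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub
classifier = pipeline("image-classification", model="Li-Tang/rare-puppers")

# Placeholder path or URL to a photo of a corgi, samoyed, or shiba inu
print(classifier("path/to/dog_photo.jpg"))
```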
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
zluvolyote/s288cExpressionPrediction_k6
|
zluvolyote
| 2022-07-12T16:54:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-12T16:02:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: s288cExpressionPrediction_k6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# s288cExpressionPrediction_k6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4418
- Accuracy: 0.8067
- F1: 0.7882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 58 | 0.5315 | 0.7278 | 0.7572 |
| No log | 2.0 | 116 | 0.4604 | 0.7853 | 0.7841 |
| No log | 3.0 | 174 | 0.4418 | 0.8067 | 0.7882 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
reachrkr/TEST2ppo-LunarLander-v2
|
reachrkr
| 2022-07-12T16:20:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-12T16:20:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.96 +/- 25.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained policy from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="reachrkr/TEST2ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
andy-0v0/orcs-and-friends
|
andy-0v0
| 2022-07-12T16:03:57Z | 53 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-12T15:50:36Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: orcs-and-friends
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.522522509098053
---
# orcs-and-friends
Five-way classifier for orcs and their friends
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### goblin

#### gremlin

#### ogre

#### orc

#### troll

|
MarLac/wav2vec2-base-timit-demo-google-colab
|
MarLac
| 2022-07-12T15:41:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-12T08:24:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5816
- Wer: 0.3533
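As a usage illustration (not part of the original card), the checkpoint can be loaded with the speech-recognition pipeline; the audio path is a placeholder and should point to 16 kHz mono audio:
```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 CTC model for transcription
asr = pipeline("automatic-speech-recognition", model="MarLac/wav2vec2-base-timit-demo-google-colab")

# Placeholder path; expects 16 kHz mono speech
print(asr("path/to/audio.wav")["text"])
```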
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.243 | 0.5 | 500 | 1.0798 | 0.7752 |
| 0.834 | 1.01 | 1000 | 0.6206 | 0.5955 |
| 0.5503 | 1.51 | 1500 | 0.5387 | 0.5155 |
| 0.4548 | 2.01 | 2000 | 0.4660 | 0.4763 |
| 0.3412 | 2.51 | 2500 | 0.8381 | 0.4836 |
| 0.3128 | 3.02 | 3000 | 0.4818 | 0.4519 |
| 0.2547 | 3.52 | 3500 | 0.4415 | 0.4230 |
| 0.2529 | 4.02 | 4000 | 0.4624 | 0.4219 |
| 0.2103 | 4.52 | 4500 | 0.4714 | 0.4096 |
| 0.2102 | 5.03 | 5000 | 0.4968 | 0.4087 |
| 0.1838 | 5.53 | 5500 | 0.4643 | 0.4131 |
| 0.1721 | 6.03 | 6000 | 0.4676 | 0.3979 |
| 0.1548 | 6.53 | 6500 | 0.4765 | 0.4085 |
| 0.1595 | 7.04 | 7000 | 0.4797 | 0.3941 |
| 0.1399 | 7.54 | 7500 | 0.4753 | 0.3902 |
| 0.1368 | 8.04 | 8000 | 0.4697 | 0.3945 |
| 0.1276 | 8.54 | 8500 | 0.5438 | 0.3869 |
| 0.1255 | 9.05 | 9000 | 0.5660 | 0.3841 |
| 0.1077 | 9.55 | 9500 | 0.4964 | 0.3947 |
| 0.1197 | 10.05 | 10000 | 0.5349 | 0.3849 |
| 0.1014 | 10.55 | 10500 | 0.5558 | 0.3883 |
| 0.0949 | 11.06 | 11000 | 0.5673 | 0.3785 |
| 0.0882 | 11.56 | 11500 | 0.5589 | 0.3955 |
| 0.0906 | 12.06 | 12000 | 0.5752 | 0.4120 |
| 0.1064 | 12.56 | 12500 | 0.5080 | 0.3727 |
| 0.0854 | 13.07 | 13000 | 0.5398 | 0.3798 |
| 0.0754 | 13.57 | 13500 | 0.5237 | 0.3816 |
| 0.0791 | 14.07 | 14000 | 0.4967 | 0.3725 |
| 0.0731 | 14.57 | 14500 | 0.5287 | 0.3744 |
| 0.0719 | 15.08 | 15000 | 0.5633 | 0.3596 |
| 0.062 | 15.58 | 15500 | 0.5399 | 0.3752 |
| 0.0681 | 16.08 | 16000 | 0.5151 | 0.3759 |
| 0.0559 | 16.58 | 16500 | 0.5564 | 0.3709 |
| 0.0533 | 17.09 | 17000 | 0.5933 | 0.3743 |
| 0.0563 | 17.59 | 17500 | 0.5381 | 0.3670 |
| 0.0527 | 18.09 | 18000 | 0.5685 | 0.3731 |
| 0.0492 | 18.59 | 18500 | 0.5728 | 0.3725 |
| 0.0509 | 19.1 | 19000 | 0.6074 | 0.3807 |
| 0.0436 | 19.6 | 19500 | 0.5762 | 0.3628 |
| 0.0434 | 20.1 | 20000 | 0.6721 | 0.3729 |
| 0.0416 | 20.6 | 20500 | 0.5842 | 0.3700 |
| 0.0431 | 21.11 | 21000 | 0.5374 | 0.3607 |
| 0.037 | 21.61 | 21500 | 0.5556 | 0.3667 |
| 0.036 | 22.11 | 22000 | 0.5608 | 0.3592 |
| 0.04 | 22.61 | 22500 | 0.5272 | 0.3637 |
| 0.047 | 23.12 | 23000 | 0.5234 | 0.3625 |
| 0.0506 | 23.62 | 23500 | 0.5427 | 0.3629 |
| 0.0418 | 24.12 | 24000 | 0.5590 | 0.3626 |
| 0.037 | 24.62 | 24500 | 0.5615 | 0.3555 |
| 0.0429 | 25.13 | 25000 | 0.5806 | 0.3616 |
| 0.045 | 25.63 | 25500 | 0.5777 | 0.3639 |
| 0.0283 | 26.13 | 26000 | 0.5987 | 0.3617 |
| 0.0253 | 26.63 | 26500 | 0.5671 | 0.3551 |
| 0.032 | 27.14 | 27000 | 0.5464 | 0.3582 |
| 0.0321 | 27.64 | 27500 | 0.5634 | 0.3573 |
| 0.0274 | 28.14 | 28000 | 0.5513 | 0.3575 |
| 0.0245 | 28.64 | 28500 | 0.5745 | 0.3537 |
| 0.0251 | 29.15 | 29000 | 0.5759 | 0.3547 |
| 0.0222 | 29.65 | 29500 | 0.5816 | 0.3533 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
andreaschandra/xlm-roberta-base-finetuned-panx-en
|
andreaschandra
| 2022-07-12T15:39:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-12T15:35:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6774373259052925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3932
- F1: 0.6774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0236 | 1.0 | 50 | 0.5462 | 0.5109 |
| 0.5047 | 2.0 | 100 | 0.4387 | 0.6370 |
| 0.3716 | 3.0 | 150 | 0.3932 | 0.6774 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zluvolyote/CUBERT
|
zluvolyote
| 2022-07-12T15:09:51Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-15T18:09:44Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: CUBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CUBERT
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 58 | 5.5281 |
| No log | 2.0 | 116 | 5.2508 |
| No log | 3.0 | 174 | 5.2203 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.1
- Tokenizers 0.12.1
|
huggingtweets/scottduncanwx
|
huggingtweets
| 2022-07-12T14:43:36Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-12T14:37:59Z |
---
language: en
thumbnail: http://www.huggingtweets.com/scottduncanwx/1657637010818/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535379125296418821/ntSMv4LC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Scott Duncan</div>
<div style="text-align: center; font-size: 14px;">@scottduncanwx</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Scott Duncan.
| Data | Scott Duncan |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 186 |
| Short tweets | 223 |
| Tweets kept | 2841 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tziokng8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scottduncanwx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2swonujn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2swonujn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/scottduncanwx')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/piotrikonowicz1
|
huggingtweets
| 2022-07-12T14:00:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-12T14:00:22Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/770622589664460802/bgUHfTNZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Piotr Ikonowicz</div>
<div style="text-align: center; font-size: 14px;">@piotrikonowicz1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Piotr Ikonowicz.
| Data | Piotr Ikonowicz |
| --- | --- |
| Tweets downloaded | 133 |
| Retweets | 3 |
| Short tweets | 13 |
| Tweets kept | 117 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/156jwrd1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @piotrikonowicz1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w029u281) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w029u281/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/piotrikonowicz1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
workRL/TEST2ppo-CarRacing-v0
|
workRL
| 2022-07-12T13:31:15Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-12T13:29:34Z |
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -69.53 +/- 1.56
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained policy from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="workRL/TEST2ppo-CarRacing-v0", filename="ppo-CarRacing-v0.zip")
model = PPO.load(checkpoint)
```
|
hugginglearners/rice_image_classification
|
hugginglearners
| 2022-07-12T13:27:14Z | 0 | 0 |
fastai
|
[
"fastai",
"image-classification",
"region:us"
] |
image-classification
| 2022-07-09T06:03:15Z |
---
tags:
- fastai
- image-classification
---
## Model description
This repo contains the trained model for rice image classification
Full credits go to [Vu Minh Chien](https://www.linkedin.com/in/vumichien/)
Motivation: Rice, which is among the most widely produced grain products worldwide, has many genetic varieties. These varieties are distinguished from one another by features such as texture, shape, and color. Because these features distinguish rice varieties, they can be used to classify varieties and evaluate seed quality.
## Intended uses & limitations
In this repo, Arborio, Basmati, Ipsala, Jasmine, and Karacadag, five different varieties of rice often grown in Turkey, were used. A total of 75,000 grain images, 15,000 from each of these varieties, are included in the dataset.
## Training and evaluation data
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 3e-4 |
| freeze_epochs| 3 |
| unfreeze_epochs| 10|
| training_precision | float16 |
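A minimal loading sketch (assumed, not part of the original card), using the fastai integration in `huggingface_hub` and a placeholder image path:
```python
from huggingface_hub import from_pretrained_fastai

# Download and load the fastai Learner from the Hub
learner = from_pretrained_fastai("hugginglearners/rice_image_classification")

# Placeholder path to a rice grain image; returns the predicted variety
pred, pred_idx, probs = learner.predict("path/to/rice_grain.jpg")
print(pred)
```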
|
ymcnabb/finetuning-sentiment-model
|
ymcnabb
| 2022-07-12T13:17:58Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-12T12:24:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8758169934640523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3291
- Accuracy: 0.8733
- F1: 0.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
suc155/distilbert-base-uncased-finetuned-sst2
|
suc155
| 2022-07-12T12:43:16Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-12T12:22:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9151376146788991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3056
- Accuracy: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1827 | 1.0 | 4210 | 0.3056 | 0.9151 |
| 0.1235 | 2.0 | 8420 | 0.3575 | 0.9071 |
| 0.1009 | 3.0 | 12630 | 0.3896 | 0.9071 |
| 0.0561 | 4.0 | 16840 | 0.4810 | 0.9060 |
| 0.0406 | 5.0 | 21050 | 0.5375 | 0.9048 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_standart
|
Nonzerophilip
| 2022-07-12T12:42:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-19T09:36:49Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner_swedish_small_set_health_and_standart
results: []
---
# Named Entity Recognition model for swedish
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-ner](https://huggingface.co/KBLab/bert-base-swedish-cased-ner) for Swedish only. It has been fine-tuned on the concatenation of a smaller version of SUC 3.0 and some medical text from the Swedish website 1177.
The model will predict the following entities:
| Tag | Name | Example |
|:-------------:|:-----:|:----:|
| PER | Person | (e.g., Johan and Sofia) |
| LOC | Location | (e.g., Göteborg and Spanien) |
| ORG | Organisation | (e.g., Volvo and Skatteverket) |
| PHARMA_DRUGS | Medication | (e.g., Paracetamol and Omeprazol) |
| HEALTH | Illness/Diseases | (e.g., Cancer, sjuk and diabetes) |
| Relation | Family members | (e.g., Mamma and Farmor) |
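As an illustration (not part of the original card), the entities above can be extracted with the token-classification pipeline; the Swedish example sentence is made up:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_standart",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# "Johan in Gothenburg takes Paracetamol for his diabetes."
print(ner("Johan i Göteborg tar Paracetamol mot sin diabetes."))
```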
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_swedish_small_set_health_and_standart
It achieves the following results on the evaluation set:
- Loss: 0.0963
- Precision: 0.7548
- Recall: 0.7811
- F1: 0.7677
- Accuracy: 0.9756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 219 | 0.1123 | 0.7674 | 0.6567 | 0.7078 | 0.9681 |
| No log | 2.0 | 438 | 0.0934 | 0.7643 | 0.7662 | 0.7652 | 0.9738 |
| 0.1382 | 3.0 | 657 | 0.0963 | 0.7548 | 0.7811 | 0.7677 | 0.9756 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mohammedbriman/t5-small-finetuned-cnn-dm-test
|
mohammedbriman
| 2022-07-12T12:38:05Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-12T09:51:25Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-finetuned-cnn-dm-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-dm-test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4521
- Validation Loss: 2.1296
- Epoch: 0
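A usage sketch (assumed, not from the original card): since the repo holds TensorFlow weights, the pipeline is pinned to the TF framework, and the article text is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="mohammedbriman/t5-small-finetuned-cnn-dm-test",
    framework="tf",  # the checkpoint was trained and saved with Keras/TensorFlow
)

article = "Replace this placeholder with a news article to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```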
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 408096, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4521 | 2.1296 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
xyma/PROP-marco-step400k
|
xyma
| 2022-07-12T11:53:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"PROP",
"Pretrain4IR",
"en",
"dataset:msmarco",
"arxiv:2010.10137",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-07-12T09:06:57Z |
---
language: en
tags:
- PROP
- Pretrain4IR
license: apache-2.0
datasets:
- msmarco
---
# PROP-marco-step400k
**PROP**, **P**re-training with **R**epresentative w**O**rds **P**rediction, is a new pre-training method tailored for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the “ideal” document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. The full paper can be found [here](https://arxiv.org/pdf/2010.10137.pdf).
This model is pre-trained with more steps than [PROP-marco](https://huggingface.co/xyma/PROP-marco) on MS MARCO document corpus, and used at the MS MARCO Document Ranking Leaderboard where we reached 1st place.
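A minimal loading sketch (not from the original card); the checkpoint is a BERT-style encoder meant for downstream fine-tuning on ranking tasks, and the query-document pair below is illustrative:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xyma/PROP-marco-step400k")
model = AutoModel.from_pretrained("xyma/PROP-marco-step400k")

# Encode a query-document pair as a BERT re-ranker would (illustrative only)
inputs = tokenizer("what is prop pre-training",
                   "PROP is pre-trained with representative words prediction.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```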
# Citation
If you find our work useful, please consider citing our paper:
```bibtex
@inproceedings{DBLP:conf/wsdm/MaGZFJC21,
author = {Xinyu Ma and
Jiafeng Guo and
Ruqing Zhang and
Yixing Fan and
Xiang Ji and
Xueqi Cheng},
editor = {Liane Lewin{-}Eytan and
David Carmel and
Elad Yom{-}Tov and
Eugene Agichtein and
Evgeniy Gabrilovich},
title = {{PROP:} Pre-training with Representative Words Prediction for Ad-hoc
Retrieval},
booktitle = {{WSDM} '21, The Fourteenth {ACM} International Conference on Web Search
and Data Mining, Virtual Event, Israel, March 8-12, 2021},
pages = {283--291},
publisher = {{ACM}},
year = {2021},
url = {https://doi.org/10.1145/3437963.3441777},
doi = {10.1145/3437963.3441777},
timestamp = {Wed, 07 Apr 2021 16:17:44 +0200},
biburl = {https://dblp.org/rec/conf/wsdm/MaGZFJC21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
dungeoun/pos_neg_neu_tweet_BERT
|
dungeoun
| 2022-07-12T11:08:00Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-07-12T06:22:25Z |
---
license: apache-2.0
pipeline_tag: text-classification
---
This repository contains a fine-tuned BERT model trained on tweets labeled with Positive, Negative, and Neutral sentiment.
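A usage sketch, assuming the checkpoint is stored in the standard `transformers` format (the card does not state this explicitly):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="dungeoun/pos_neg_neu_tweet_BERT")

# Returns one of the three sentiment classes; label names depend on the checkpoint config
print(classifier("Just got my new phone and I love it!"))
```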
|
MiguelCosta/finetuning-sentiment-model-24000-samples
|
MiguelCosta
| 2022-07-12T10:48:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-12T06:17:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-24000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
- name: F1
type: f1
value: 0.9273927392739274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-24000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3505
- Accuracy: 0.9267
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
luke-thorburn/suggest-conclusion-full-finetune
|
luke-thorburn
| 2022-07-12T10:02:48Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Consider the facts:
* [premise 1]
* [premise 2]
...
* [premise n]
We must conclude that: [generated conclusion]
```
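For illustration (not from the original card), the template can be filled in and passed to a standard text-generation pipeline; the premises below are made up:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="luke-thorburn/suggest-conclusion-full-finetune")

prompt = (
    "Consider the facts:\n"
    "* All humans are mortal.\n"
    "* Socrates is a human.\n"
    "We must conclude that:"
)
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
```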
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
|
luke-thorburn/suggest-intermediary-claims-full-finetune
|
luke-thorburn
| 2022-07-12T09:56:47Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate a chain of reasoning from one claim to another
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating a sequence of claims (a 'chain of reasoning') that joins one claim to another. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Input: [start claim] -> [end claim]
Output: [start claim] -> [generated intermediate claim 1] -> ... -> [generated intermediate claim n] -> [end claim]
```
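For illustration (not from the original card), a made-up start and end claim can be formatted per the template and passed to a text-generation pipeline:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="luke-thorburn/suggest-intermediary-claims-full-finetune")

# Made-up claims, formatted as "Input: [start claim] -> [end claim]"
prompt = "Input: Cities should build more bike lanes -> Air quality will improve\nOutput:"
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```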
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
|
luke-thorburn/suggest-conclusion-soft
|
luke-thorburn
| 2022-07-12T09:43:47Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt]- [premise 1]
- [premise 2]
...
- [premise n]
Conclusion: [generated conclusion]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
|
luke-thorburn/suggest-objections-soft
|
luke-thorburn
| 2022-07-12T09:43:28Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate objections to a claim
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt][original claim]
Cons:
- [objection 1]
- [objection 2]
...
- [objection n]
- [generated objection]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
|
fxmarty/20220712-h08m05s32_
|
fxmarty
| 2022-07-12T08:05:37Z | 0 | 0 | null |
[
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"region:us"
] |
image-classification
| 2022-07-12T08:05:32Z |
---
pipeline_tag: image-classification
datasets:
- beans
metrics:
- accuracy
tags:
- vit
---
**task**: `image-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}`
**Number of evaluation samples:** `All dataset`
Fixed parameters:
* **model_name_or_path**: `nateraw/vit-base-beans`
* **dataset**:
* **path**: `beans`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'image'}`
* **ref_keys**: `['labels']`
* **quantization_approach**: `dynamic`
* **node_exclusion**: `[]`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']`, `[]`
* **per_channel**: `False`, `True`
# Evaluation
## Non-time metrics
| operators_to_quantize | per_channel | | accuracy (original) | accuracy (optimized) |
| :-------------------: | :---------: | :-: | :-----------------: | :------------------: |
| `['Add', 'MatMul']` | `False` | \| | 0.980 | 0.980 |
| `['Add', 'MatMul']` | `True` | \| | 0.980 | 0.980 |
| `['Add']` | `False` | \| | 0.980 | 0.980 |
| `['Add']` | `True` | \| | 0.980 | 0.980 |
| `[]` | `False` | \| | 0.980 | 0.980 |
| `[]` | `True` | \| | 0.980 | 0.980 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 1, input length = 32.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 201.25 | 70.30 | \| | 5.00 | 14.27 |
| `['Add', 'MatMul']` | `True` | \| | 203.52 | 72.48 | \| | 4.93 | 13.80 |
| `['Add']` | `False` | \| | 166.03 | 150.93 | \| | 6.07 | 6.67 |
| `['Add']` | `True` | \| | 200.82 | 163.17 | \| | 5.00 | 6.13 |
| `[]` | `False` | \| | 190.99 | 162.06 | \| | 5.27 | 6.20 |
| `[]` | `True` | \| | 155.15 | 162.52 | \| | 6.47 | 6.20 |
Below, time metrics for batch size = 1, input length = 64.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 165.85 | 70.60 | \| | 6.07 | 14.20 |
| `['Add', 'MatMul']` | `True` | \| | 161.41 | 72.71 | \| | 6.20 | 13.80 |
| `['Add']` | `False` | \| | 200.45 | 129.40 | \| | 5.00 | 7.73 |
| `['Add']` | `True` | \| | 154.68 | 136.42 | \| | 6.47 | 7.40 |
| `[]` | `False` | \| | 166.97 | 162.15 | \| | 6.00 | 6.20 |
| `[]` | `True` | \| | 166.32 | 162.81 | \| | 6.07 | 6.20 |
Below, time metrics for batch size = 1, input length = 128.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 199.48 | 70.98 | \| | 5.07 | 14.13 |
| `['Add', 'MatMul']` | `True` | \| | 199.65 | 71.78 | \| | 5.07 | 13.93 |
| `['Add']` | `False` | \| | 199.08 | 137.97 | \| | 5.07 | 7.27 |
| `['Add']` | `True` | \| | 189.93 | 162.45 | \| | 5.33 | 6.20 |
| `[]` | `False` | \| | 191.63 | 162.54 | \| | 5.27 | 6.20 |
| `[]` | `True` | \| | 200.38 | 162.55 | \| | 5.00 | 6.20 |
Below, time metrics for batch size = 4, input length = 32.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 655.84 | 243.33 | \| | 1.53 | 4.13 |
| `['Add', 'MatMul']` | `True` | \| | 661.27 | 221.16 | \| | 1.53 | 4.53 |
| `['Add']` | `False` | \| | 662.84 | 529.28 | \| | 1.53 | 1.93 |
| `['Add']` | `True` | \| | 512.47 | 470.66 | \| | 2.00 | 2.13 |
| `[]` | `False` | \| | 562.81 | 501.77 | \| | 1.80 | 2.00 |
| `[]` | `True` | \| | 505.81 | 521.20 | \| | 2.00 | 1.93 |
Below, time metrics for batch size = 4, input length = 64.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 654.58 | 258.54 | \| | 1.53 | 3.93 |
| `['Add', 'MatMul']` | `True` | \| | 617.44 | 234.05 | \| | 1.67 | 4.33 |
| `['Add']` | `False` | \| | 661.51 | 478.81 | \| | 1.53 | 2.13 |
| `['Add']` | `True` | \| | 657.01 | 660.23 | \| | 1.53 | 1.53 |
| `[]` | `False` | \| | 661.64 | 474.28 | \| | 1.53 | 2.13 |
| `[]` | `True` | \| | 661.29 | 471.09 | \| | 1.53 | 2.13 |
Below, time metrics for batch size = 4, input length = 128.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 654.80 | 219.38 | \| | 1.53 | 4.60 |
| `['Add', 'MatMul']` | `True` | \| | 663.50 | 222.37 | \| | 1.53 | 4.53 |
| `['Add']` | `False` | \| | 625.56 | 529.02 | \| | 1.60 | 1.93 |
| `['Add']` | `True` | \| | 655.08 | 499.41 | \| | 1.53 | 2.07 |
| `[]` | `False` | \| | 655.92 | 473.01 | \| | 1.53 | 2.13 |
| `[]` | `True` | \| | 505.54 | 659.92 | \| | 2.00 | 1.53 |
Below, time metrics for batch size = 8, input length = 32.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 968.83 | 443.80 | \| | 1.07 | 2.27 |
| `['Add', 'MatMul']` | `True` | \| | 1255.70 | 489.55 | \| | 0.80 | 2.07 |
| `['Add']` | `False` | \| | 1301.35 | 938.14 | \| | 0.80 | 1.07 |
| `['Add']` | `True` | \| | 1279.54 | 931.91 | \| | 0.80 | 1.13 |
| `[]` | `False` | \| | 1292.66 | 1318.07 | \| | 0.80 | 0.80 |
| `[]` | `True` | \| | 1290.35 | 1314.74 | \| | 0.80 | 0.80 |
Below, time metrics for batch size = 8, input length = 64.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 1305.45 | 438.06 | \| | 0.80 | 2.33 |
| `['Add', 'MatMul']` | `True` | \| | 1296.68 | 450.40 | \| | 0.80 | 2.27 |
| `['Add']` | `False` | \| | 968.21 | 949.81 | \| | 1.07 | 1.07 |
| `['Add']` | `True` | \| | 1012.35 | 1317.46 | \| | 1.00 | 0.80 |
| `[]` | `False` | \| | 1213.91 | 961.79 | \| | 0.87 | 1.07 |
| `[]` | `True` | \| | 956.39 | 945.41 | \| | 1.07 | 1.07 |
Below, time metrics for batch size = 8, input length = 128.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 1120.12 | 497.17 | \| | 0.93 | 2.07 |
| `['Add', 'MatMul']` | `True` | \| | 1289.50 | 443.46 | \| | 0.80 | 2.27 |
| `['Add']` | `False` | \| | 1294.65 | 930.97 | \| | 0.80 | 1.13 |
| `['Add']` | `True` | \| | 1181.21 | 933.82 | \| | 0.87 | 1.13 |
| `[]` | `False` | \| | 1245.61 | 1318.07 | \| | 0.87 | 0.80 |
| `[]` | `True` | \| | 1285.81 | 1318.82 | \| | 0.80 | 0.80 |
|
ArneD/xlm-roberta-base-finetuned-panx-all
|
ArneD
| 2022-07-12T07:50:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-12T06:47:20Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset (EN, FR, DE, IT).
It achieves the following results on the evaluation set:
- Loss: 0.1769
- F1: 0.8535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2934 | 1.0 | 835 | 0.1853 | 0.8250 |
| 0.1569 | 2.0 | 1670 | 0.1714 | 0.8438 |
| 0.1008 | 3.0 | 2505 | 0.1769 | 0.8535 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
fxmarty/20220712-h07m20s32_example_conll2003
|
fxmarty
| 2022-07-12T07:20:37Z | 0 | 0 | null |
[
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"region:us"
] |
token-classification
| 2022-07-12T07:20:32Z |
---
pipeline_tag: token-classification
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
tags:
- distilbert
---
**task**: `token-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': 'avx512_vnni'}`
**Number of evaluation samples:** `1000`
Fixed parameters:
* **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english`
* **dataset**:
* **path**: `conll2003`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'tokens'}`
* **ref_keys**: `['ner_tags']`
* **calibration_split**: `train`
* **node_exclusion**: `[]`
* **per_channel**: `False`
* **calibration**:
* **method**: `minmax`
* **num_calibration_samples**: `100`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **quantization_approach**: `dynamic`, `static`
* **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']`
# Evaluation
## Non-time metrics
| quantization_approach | operators_to_quantize | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) |
| :-------------------: | :-------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 0.937 | 0.937 | \| | 0.953 | 0.953 | \| | 0.945 | 0.945 | \| | 0.988 | 0.988 |
| `dynamic` | `['Add']` | \| | 0.937 | 0.937 | \| | 0.953 | 0.953 | \| | 0.945 | 0.945 | \| | 0.988 | 0.988 |
| `static` | `['Add', 'MatMul']` | \| | 0.937 | 0.074 | \| | 0.953 | 0.253 | \| | 0.945 | 0.114 | \| | 0.988 | 0.363 |
| `static` | `['Add']` | \| | 0.937 | 0.065 | \| | 0.953 | 0.186 | \| | 0.945 | 0.096 | \| | 0.988 | 0.340 |
## Time metrics
Time benchmarks were run for 3 seconds per config.
Below, time metrics for batch size = 1, input length = 64.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 57.64 | 12.30 | \| | 17.67 | 81.33 |
| `dynamic` | `['Add']` | \| | 43.51 | 29.42 | \| | 23.00 | 34.00 |
| `static` | `['Add', 'MatMul']` | \| | 43.05 | 21.11 | \| | 23.33 | 47.67 |
| `static` | `['Add']` | \| | 43.50 | 37.93 | \| | 23.00 | 26.67 |
Below, time metrics for batch size = 4, input length = 64.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 119.50 | 39.92 | \| | 8.67 | 25.33 |
| `dynamic` | `['Add']` | \| | 119.62 | 107.42 | \| | 8.67 | 9.33 |
| `static` | `['Add', 'MatMul']` | \| | 120.23 | 56.94 | \| | 8.33 | 17.67 |
| `static` | `['Add']` | \| | 119.10 | 130.78 | \| | 8.67 | 7.67 |
Below, time metrics for batch size = 8, input length = 64.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 165.84 | 75.45 | \| | 6.33 | 13.33 |
| `dynamic` | `['Add']` | \| | 214.65 | 211.41 | \| | 4.67 | 5.00 |
| `static` | `['Add', 'MatMul']` | \| | 166.53 | 129.00 | \| | 6.33 | 8.00 |
| `static` | `['Add']` | \| | 214.81 | 256.95 | \| | 4.67 | 4.00 |
|
AntiSquid/TEST2ppo-LunarLander-v2
|
AntiSquid
| 2022-07-12T07:10:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-06T21:53:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 285.66 +/- 15.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card.
checkpoint = load_from_hub("AntiSquid/TEST2ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MiguelCosta/finetuning-sentiment-model-3000-samples
|
MiguelCosta
| 2022-07-12T06:06:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-12T04:48:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8810289389067525
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5805
- Accuracy: 0.8767
- F1: 0.8810
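Assuming the fine-tuned checkpoint is available on the Hub under this repository id, a minimal inference sketch with the `transformers` pipeline API:
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier (repository id taken from this card).
classifier = pipeline(
    "text-classification",
    model="MiguelCosta/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was a pleasant surprise from start to finish."))
```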
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
reecejocumsenbb/testfield-finetuned-imdb
|
reecejocumsenbb
| 2022-07-12T06:02:47Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-12T04:23:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: reecejocumsenbb/testfield-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# reecejocumsenbb/testfield-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0451
- Validation Loss: 3.9664
- Epoch: 0
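Since the card records TensorFlow training, a minimal TensorFlow inference sketch is given below; the example sentence is illustrative, and it assumes the checkpoint was pushed to the Hub under this repository id.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

repo_id = "reecejocumsenbb/testfield-finetuned-imdb"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForMaskedLM.from_pretrained(repo_id)

inputs = tokenizer("This movie was absolutely [MASK].", return_tensors="tf")
logits = model(**inputs).logits

# Locate the [MASK] position and decode its highest-scoring prediction.
mask_index = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0])
predicted_id = int(tf.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))
```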
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -993, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.0451 | 3.9664 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
paola-md/recipe-distilbert-i
|
paola-md
| 2022-07-12T05:14:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-12T04:54:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-distilbert-i
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-distilbert-i
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0288
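Since this is a DistilBERT masked-language model, a minimal usage sketch with the fill-mask pipeline (the example sentence is illustrative recipe-style text, and it assumes the checkpoint is available under this repository id):
```python
from transformers import pipeline

# DistilBERT uses the [MASK] token; the repository id is taken from this card.
fill_mask = pipeline("fill-mask", model="paola-md/recipe-distilbert-i")
for prediction in fill_mask("Preheat the [MASK] to 350 degrees."):
    print(prediction["token_str"], prediction["score"])
```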
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3931 | 1.0 | 152 | 1.7738 |
| 1.7533 | 2.0 | 304 | 1.5109 |
| 1.5584 | 3.0 | 456 | 1.4003 |
| 1.443 | 4.0 | 608 | 1.3296 |
| 1.3551 | 5.0 | 760 | 1.2270 |
| 1.2981 | 6.0 | 912 | 1.1870 |
| 1.2577 | 7.0 | 1064 | 1.1511 |
| 1.2216 | 8.0 | 1216 | 1.1298 |
| 1.1958 | 9.0 | 1368 | 1.1087 |
| 1.1685 | 10.0 | 1520 | 1.0858 |
| 1.1533 | 11.0 | 1672 | 1.0820 |
| 1.1358 | 12.0 | 1824 | 1.0659 |
| 1.1286 | 13.0 | 1976 | 1.0382 |
| 1.1128 | 14.0 | 2128 | 1.0468 |
| 1.11 | 15.0 | 2280 | 1.0399 |
| 1.094 | 16.0 | 2432 | 1.0382 |
| 1.0969 | 17.0 | 2584 | 1.0096 |
| 1.0868 | 18.0 | 2736 | 1.0235 |
| 1.0845 | 19.0 | 2888 | 1.0227 |
| 1.0855 | 20.0 | 3040 | 1.0288 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/legalectra-small-spanish-becasv3-6
|
Evelyn18
| 2022-07-12T05:05:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-12T04:49:13Z |
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-6
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8441
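A minimal question-answering sketch (the question and context below are illustrative placeholders, not drawn from the becasv2 dataset):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/legalectra-small-spanish-becasv3-6")
result = qa(
    question="¿Quién puede solicitar la beca?",
    context=(
        "La beca está dirigida a estudiantes de nuevo ingreso que acrediten "
        "un promedio mínimo de ocho y que no cuenten con otro apoyo económico."
    ),
)
print(result["answer"], result["score"])
```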
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.6469 |
| No log | 2.0 | 10 | 5.5104 |
| No log | 3.0 | 15 | 5.4071 |
| No log | 4.0 | 20 | 5.3313 |
| No log | 5.0 | 25 | 5.2629 |
| No log | 6.0 | 30 | 5.1972 |
| No log | 7.0 | 35 | 5.1336 |
| No log | 8.0 | 40 | 5.0667 |
| No log | 9.0 | 45 | 5.0030 |
| No log | 10.0 | 50 | 4.9302 |
| No log | 11.0 | 55 | 4.8646 |
| No log | 12.0 | 60 | 4.7963 |
| No log | 13.0 | 65 | 4.7328 |
| No log | 14.0 | 70 | 4.6735 |
| No log | 15.0 | 75 | 4.6258 |
| No log | 16.0 | 80 | 4.5869 |
| No log | 17.0 | 85 | 4.5528 |
| No log | 18.0 | 90 | 4.5177 |
| No log | 19.0 | 95 | 4.4916 |
| No log | 20.0 | 100 | 4.4685 |
| No log | 21.0 | 105 | 4.4371 |
| No log | 22.0 | 110 | 4.4271 |
| No log | 23.0 | 115 | 4.3905 |
| No log | 24.0 | 120 | 4.3931 |
| No log | 25.0 | 125 | 4.3902 |
| No log | 26.0 | 130 | 4.3772 |
| No log | 27.0 | 135 | 4.3981 |
| No log | 28.0 | 140 | 4.4463 |
| No log | 29.0 | 145 | 4.4501 |
| No log | 30.0 | 150 | 4.4654 |
| No log | 31.0 | 155 | 4.4069 |
| No log | 32.0 | 160 | 4.4108 |
| No log | 33.0 | 165 | 4.4394 |
| No log | 34.0 | 170 | 4.4320 |
| No log | 35.0 | 175 | 4.3541 |
| No log | 36.0 | 180 | 4.4534 |
| No log | 37.0 | 185 | 4.2616 |
| No log | 38.0 | 190 | 4.2474 |
| No log | 39.0 | 195 | 4.4358 |
| No log | 40.0 | 200 | 4.3060 |
| No log | 41.0 | 205 | 4.1866 |
| No log | 42.0 | 210 | 4.2735 |
| No log | 43.0 | 215 | 4.2739 |
| No log | 44.0 | 220 | 4.1812 |
| No log | 45.0 | 225 | 4.2484 |
| No log | 46.0 | 230 | 4.3706 |
| No log | 47.0 | 235 | 4.3487 |
| No log | 48.0 | 240 | 4.2805 |
| No log | 49.0 | 245 | 4.3180 |
| No log | 50.0 | 250 | 4.3574 |
| No log | 51.0 | 255 | 4.2823 |
| No log | 52.0 | 260 | 4.0643 |
| No log | 53.0 | 265 | 4.0729 |
| No log | 54.0 | 270 | 4.2368 |
| No log | 55.0 | 275 | 4.2845 |
| No log | 56.0 | 280 | 4.1009 |
| No log | 57.0 | 285 | 4.0629 |
| No log | 58.0 | 290 | 4.1250 |
| No log | 59.0 | 295 | 4.2048 |
| No log | 60.0 | 300 | 4.2412 |
| No log | 61.0 | 305 | 4.1653 |
| No log | 62.0 | 310 | 4.1433 |
| No log | 63.0 | 315 | 4.1309 |
| No log | 64.0 | 320 | 4.1381 |
| No log | 65.0 | 325 | 4.2162 |
| No log | 66.0 | 330 | 4.1858 |
| No log | 67.0 | 335 | 4.1342 |
| No log | 68.0 | 340 | 4.1247 |
| No log | 69.0 | 345 | 4.1701 |
| No log | 70.0 | 350 | 4.1915 |
| No log | 71.0 | 355 | 4.1356 |
| No log | 72.0 | 360 | 4.1766 |
| No log | 73.0 | 365 | 4.1296 |
| No log | 74.0 | 370 | 4.0594 |
| No log | 75.0 | 375 | 4.0601 |
| No log | 76.0 | 380 | 4.0328 |
| No log | 77.0 | 385 | 3.9978 |
| No log | 78.0 | 390 | 4.0070 |
| No log | 79.0 | 395 | 4.0519 |
| No log | 80.0 | 400 | 4.1000 |
| No log | 81.0 | 405 | 3.9550 |
| No log | 82.0 | 410 | 3.9159 |
| No log | 83.0 | 415 | 3.9494 |
| No log | 84.0 | 420 | 4.0546 |
| No log | 85.0 | 425 | 4.2223 |
| No log | 86.0 | 430 | 4.2665 |
| No log | 87.0 | 435 | 3.8892 |
| No log | 88.0 | 440 | 3.7763 |
| No log | 89.0 | 445 | 3.8576 |
| No log | 90.0 | 450 | 4.0089 |
| No log | 91.0 | 455 | 4.1495 |
| No log | 92.0 | 460 | 4.1545 |
| No log | 93.0 | 465 | 4.0164 |
| No log | 94.0 | 470 | 3.9175 |
| No log | 95.0 | 475 | 3.9308 |
| No log | 96.0 | 480 | 3.9658 |
| No log | 97.0 | 485 | 3.9856 |
| No log | 98.0 | 490 | 3.9691 |
| No log | 99.0 | 495 | 3.9082 |
| 3.2873 | 100.0 | 500 | 3.8736 |
| 3.2873 | 101.0 | 505 | 3.8963 |
| 3.2873 | 102.0 | 510 | 3.9391 |
| 3.2873 | 103.0 | 515 | 3.9408 |
| 3.2873 | 104.0 | 520 | 3.9075 |
| 3.2873 | 105.0 | 525 | 3.8258 |
| 3.2873 | 106.0 | 530 | 3.7917 |
| 3.2873 | 107.0 | 535 | 3.7981 |
| 3.2873 | 108.0 | 540 | 3.8272 |
| 3.2873 | 109.0 | 545 | 3.8655 |
| 3.2873 | 110.0 | 550 | 3.8234 |
| 3.2873 | 111.0 | 555 | 3.7126 |
| 3.2873 | 112.0 | 560 | 3.6981 |
| 3.2873 | 113.0 | 565 | 3.7327 |
| 3.2873 | 114.0 | 570 | 3.8470 |
| 3.2873 | 115.0 | 575 | 4.0036 |
| 3.2873 | 116.0 | 580 | 4.0412 |
| 3.2873 | 117.0 | 585 | 4.0487 |
| 3.2873 | 118.0 | 590 | 4.0524 |
| 3.2873 | 119.0 | 595 | 4.0375 |
| 3.2873 | 120.0 | 600 | 3.9971 |
| 3.2873 | 121.0 | 605 | 3.8959 |
| 3.2873 | 122.0 | 610 | 3.8834 |
| 3.2873 | 123.0 | 615 | 3.9279 |
| 3.2873 | 124.0 | 620 | 3.9374 |
| 3.2873 | 125.0 | 625 | 3.9515 |
| 3.2873 | 126.0 | 630 | 3.9625 |
| 3.2873 | 127.0 | 635 | 3.9635 |
| 3.2873 | 128.0 | 640 | 3.9596 |
| 3.2873 | 129.0 | 645 | 3.8871 |
| 3.2873 | 130.0 | 650 | 3.8307 |
| 3.2873 | 131.0 | 655 | 3.8318 |
| 3.2873 | 132.0 | 660 | 3.8403 |
| 3.2873 | 133.0 | 665 | 3.8560 |
| 3.2873 | 134.0 | 670 | 3.8650 |
| 3.2873 | 135.0 | 675 | 3.8734 |
| 3.2873 | 136.0 | 680 | 3.8756 |
| 3.2873 | 137.0 | 685 | 3.8613 |
| 3.2873 | 138.0 | 690 | 3.8447 |
| 3.2873 | 139.0 | 695 | 3.8362 |
| 3.2873 | 140.0 | 700 | 3.8328 |
| 3.2873 | 141.0 | 705 | 3.8350 |
| 3.2873 | 142.0 | 710 | 3.8377 |
| 3.2873 | 143.0 | 715 | 3.8399 |
| 3.2873 | 144.0 | 720 | 3.8414 |
| 3.2873 | 145.0 | 725 | 3.8422 |
| 3.2873 | 146.0 | 730 | 3.8435 |
| 3.2873 | 147.0 | 735 | 3.8437 |
| 3.2873 | 148.0 | 740 | 3.8437 |
| 3.2873 | 149.0 | 745 | 3.8440 |
| 3.2873 | 150.0 | 750 | 3.8441 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
paola-md/recipe-distilbert-s
|
paola-md
| 2022-07-12T04:54:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-12T03:06:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-distilbert-s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-distilbert-s
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8594 | 1.0 | 844 | 1.4751 |
| 1.4763 | 2.0 | 1688 | 1.3282 |
| 1.3664 | 3.0 | 2532 | 1.2553 |
| 1.2975 | 4.0 | 3376 | 1.2093 |
| 1.2543 | 5.0 | 4220 | 1.1667 |
| 1.2189 | 6.0 | 5064 | 1.1472 |
| 1.1944 | 7.0 | 5908 | 1.1251 |
| 1.1737 | 8.0 | 6752 | 1.1018 |
| 1.1549 | 9.0 | 7596 | 1.0950 |
| 1.1387 | 10.0 | 8440 | 1.0796 |
| 1.1295 | 11.0 | 9284 | 1.0713 |
| 1.1166 | 12.0 | 10128 | 1.0639 |
| 1.1078 | 13.0 | 10972 | 1.0485 |
| 1.099 | 14.0 | 11816 | 1.0431 |
| 1.0951 | 15.0 | 12660 | 1.0425 |
| 1.0874 | 16.0 | 13504 | 1.0323 |
| 1.0828 | 17.0 | 14348 | 1.0368 |
| 1.0802 | 18.0 | 15192 | 1.0339 |
| 1.0798 | 19.0 | 16036 | 1.0247 |
| 1.0758 | 20.0 | 16880 | 1.0321 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/legalectra-small-spanish-becasv3-5
|
Evelyn18
| 2022-07-12T04:45:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-12T04:43:31Z |
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-5
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.7715 |
| No log | 2.0 | 10 | 5.7001 |
| No log | 3.0 | 15 | 5.6206 |
| No log | 4.0 | 20 | 5.5463 |
| No log | 5.0 | 25 | 5.4866 |
| No log | 6.0 | 30 | 5.4369 |
| No log | 7.0 | 35 | 5.3939 |
| No log | 8.0 | 40 | 5.3545 |
| No log | 9.0 | 45 | 5.3168 |
| No log | 10.0 | 50 | 5.2824 |
| No log | 11.0 | 55 | 5.2504 |
| No log | 12.0 | 60 | 5.2193 |
| No log | 13.0 | 65 | 5.1864 |
| No log | 14.0 | 70 | 5.1515 |
| No log | 15.0 | 75 | 5.1174 |
| No log | 16.0 | 80 | 5.0839 |
| No log | 17.0 | 85 | 5.0497 |
| No log | 18.0 | 90 | 5.0188 |
| No log | 19.0 | 95 | 4.9937 |
| No log | 20.0 | 100 | 4.9726 |
| No log | 21.0 | 105 | 4.9483 |
| No log | 22.0 | 110 | 4.9205 |
| No log | 23.0 | 115 | 4.8993 |
| No log | 24.0 | 120 | 4.8802 |
| No log | 25.0 | 125 | 4.8612 |
| No log | 26.0 | 130 | 4.8498 |
| No log | 27.0 | 135 | 4.8294 |
| No log | 28.0 | 140 | 4.8176 |
| No log | 29.0 | 145 | 4.8144 |
| No log | 30.0 | 150 | 4.8012 |
| No log | 31.0 | 155 | 4.7890 |
| No log | 32.0 | 160 | 4.7745 |
| No log | 33.0 | 165 | 4.7641 |
| No log | 34.0 | 170 | 4.7558 |
| No log | 35.0 | 175 | 4.7474 |
| No log | 36.0 | 180 | 4.7384 |
| No log | 37.0 | 185 | 4.7319 |
| No log | 38.0 | 190 | 4.7262 |
| No log | 39.0 | 195 | 4.7225 |
| No log | 40.0 | 200 | 4.7201 |
| No log | 41.0 | 205 | 4.7165 |
| No log | 42.0 | 210 | 4.7129 |
| No log | 43.0 | 215 | 4.7111 |
| No log | 44.0 | 220 | 4.7086 |
| No log | 45.0 | 225 | 4.7060 |
| No log | 46.0 | 230 | 4.7049 |
| No log | 47.0 | 235 | 4.7036 |
| No log | 48.0 | 240 | 4.7028 |
| No log | 49.0 | 245 | 4.7023 |
| No log | 50.0 | 250 | 4.7020 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/legalectra-small-spanish-becasv3-1
|
Evelyn18
| 2022-07-12T03:54:49Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-12T03:49:49Z |
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-1
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 5.8980 |
| No log | 2.0 | 16 | 5.8136 |
| No log | 3.0 | 24 | 5.7452 |
| No log | 4.0 | 32 | 5.6940 |
| No log | 5.0 | 40 | 5.6554 |
| No log | 6.0 | 48 | 5.6241 |
| No log | 7.0 | 56 | 5.5997 |
| No log | 8.0 | 64 | 5.5830 |
| No log | 9.0 | 72 | 5.5730 |
| No log | 10.0 | 80 | 5.5694 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
paola-md/recipe-distilbert-upper-Is
|
paola-md
| 2022-07-12T03:03:14Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-12T00:16:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-distilbert-upper-Is
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-distilbert-upper-Is
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6309 | 1.0 | 1305 | 1.2607 |
| 1.2639 | 2.0 | 2610 | 1.1291 |
| 1.1592 | 3.0 | 3915 | 1.0605 |
| 1.0987 | 4.0 | 5220 | 1.0128 |
| 1.0569 | 5.0 | 6525 | 0.9796 |
| 1.0262 | 6.0 | 7830 | 0.9592 |
| 1.0032 | 7.0 | 9135 | 0.9352 |
| 0.9815 | 8.0 | 10440 | 0.9186 |
| 0.967 | 9.0 | 11745 | 0.9086 |
| 0.9532 | 10.0 | 13050 | 0.8973 |
| 0.9436 | 11.0 | 14355 | 0.8888 |
| 0.9318 | 12.0 | 15660 | 0.8835 |
| 0.9243 | 13.0 | 16965 | 0.8748 |
| 0.9169 | 14.0 | 18270 | 0.8673 |
| 0.9117 | 15.0 | 19575 | 0.8610 |
| 0.9066 | 16.0 | 20880 | 0.8562 |
| 0.9028 | 17.0 | 22185 | 0.8566 |
| 0.901 | 18.0 | 23490 | 0.8583 |
| 0.8988 | 19.0 | 24795 | 0.8557 |
| 0.8958 | 20.0 | 26100 | 0.8565 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nateraw/yolov6t
|
nateraw
| 2022-07-12T02:01:04Z | 0 | 0 |
pytorch
|
[
"pytorch",
"object-detection",
"yolo",
"autogenerated-modelcard",
"en",
"arxiv:1910.09700",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2022-07-08T04:19:38Z |
---
language: en
license: gpl-3.0
library_name: pytorch
tags:
- object-detection
- yolo
- autogenerated-modelcard
model_name: yolov6t
---
# Model Card for yolov6t
<!-- Provide a quick summary of what the model is/does. -->
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Examination](#model-examination)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications-optional)
9. [Citation](#citation)
10. [Glossary](#glossary-optional)
11. [More Information](#more-information-optional)
12. [Model Card Authors](#model-card-authors-optional)
13. [Model Card Contact](#model-card-contact)
14. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.
- **Developed by:** [More Information Needed]
- **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw)
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Related Models:** [yolov6s](https://hf.co/nateraw/yolov6s), [yolov6n](https://hf.co/nateraw/yolov6n)
- **Parent Model:** N/A
- **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is meant to be used as a general object detector.
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
You can fine-tune this model for your specific task
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Don't be evil.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model often classifies objects incorrectly, especially when applied to videos. It does not handle crowds very well.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
# Model Examination
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
Please refer to the [official GitHub Repository](https://github.com/meituan/YOLOv6)
# Model Card Authors [optional]
[@nateraw](https://hf.co/nateraw)
# Model Card Contact
[@nateraw](https://hf.co/nateraw) - please leave a note in the discussions tab here
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
[More Information Needed]
</details>
|
ArthurBaia/xlm-roberta-base-squad-pt
|
ArthurBaia
| 2022-07-11T22:42:37Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v1_pt",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-11T16:59:16Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v1_pt
model-index:
- name: xlm-roberta-base-squad-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-squad-pt
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad_v1_pt dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
- epoch: 3.0
- eval_exact_match: 44.45600756859035
- eval_f1: 57.37953911779836
- eval_samples: 11095
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mariastull/testpyramidsrnd
|
mariastull
| 2022-07-11T22:28:45Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-11T22:28:40Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: mariastull/testpyramidsrnd
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
AntiSquid/longTEST2ppo-LunarLander-v2
|
AntiSquid
| 2022-07-11T22:09:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-11T22:09:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 298.08 +/- 18.36
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card.
checkpoint = load_from_hub("AntiSquid/longTEST2ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tj-solergibert/distilbert-base-uncased-finetuned-emotion
|
tj-solergibert
| 2022-07-11T21:58:32Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-11T17:19:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9285646975197546
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9285
- F1: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235 | 1.0 | 250 | 0.3085 | 0.915 | 0.9127 |
| 0.2493 | 2.0 | 500 | 0.2158 | 0.9285 | 0.9286 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_pt_vp-it_s529
|
jonatasgrosman
| 2022-07-11T20:21:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T20:20:26Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-it_s529
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
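Since the card points to the HuggingSound tool, a minimal transcription sketch is shown below; the audio path is a placeholder, and the snippet assumes the current `huggingsound` API.
```python
from huggingsound import SpeechRecognitionModel

# 16 kHz audio is expected, matching the note above; the file path is a placeholder.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_vp-it_s529")
transcriptions = model.transcribe(["/path/to/sample_16khz.wav"])
print(transcriptions[0]["transcription"])
```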
|
jonatasgrosman/exp_w2v2t_pt_vp-it_s738
|
jonatasgrosman
| 2022-07-11T20:09:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T20:08:31Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-it_s738
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_r-wav2vec2_s957
|
jonatasgrosman
| 2022-07-11T19:51:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:51:07Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_r-wav2vec2_s957
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_r-wav2vec2_s468
|
jonatasgrosman
| 2022-07-11T19:48:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:47:54Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_r-wav2vec2_s468
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
paola-md/recipe-roberta-tis
|
paola-md
| 2022-07-11T19:45:57Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-11T16:22:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: recipe-roberta-tis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-roberta-tis
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3552 | 1.0 | 1012 | 1.1292 |
| 1.1811 | 2.0 | 2024 | 1.0543 |
| 1.1095 | 3.0 | 3036 | 1.0122 |
| 1.0667 | 4.0 | 4048 | 0.9756 |
| 1.0345 | 5.0 | 5060 | 0.9478 |
| 1.0112 | 6.0 | 6072 | 0.9292 |
| 0.9922 | 7.0 | 7084 | 0.9137 |
| 0.9762 | 8.0 | 8096 | 0.9056 |
| 0.9627 | 9.0 | 9108 | 0.8977 |
| 0.9507 | 10.0 | 10120 | 0.8868 |
| 0.9411 | 11.0 | 11132 | 0.8823 |
| 0.9344 | 12.0 | 12144 | 0.8745 |
| 0.9261 | 13.0 | 13156 | 0.8688 |
| 0.9189 | 14.0 | 14168 | 0.8614 |
| 0.9133 | 15.0 | 15180 | 0.8609 |
| 0.9078 | 16.0 | 16192 | 0.8581 |
| 0.906 | 17.0 | 17204 | 0.8544 |
| 0.9015 | 18.0 | 18216 | 0.8537 |
| 0.8988 | 19.0 | 19228 | 0.8494 |
| 0.8975 | 20.0 | 20240 | 0.8491 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_pt_xls-r_s657
|
jonatasgrosman
| 2022-07-11T19:45:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:44:32Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_xls-r_s657
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_xls-r_s17
|
jonatasgrosman
| 2022-07-11T19:38:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:37:21Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_xls-r_s17
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_unispeech-sat_s756
|
jonatasgrosman
| 2022-07-11T19:26:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:26:24Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_unispeech-sat_s756
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-nl_s783
|
jonatasgrosman
| 2022-07-11T19:23:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:23:20Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-nl_s783
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Sahara/finetuning-sentiment-model-3000-samples
|
Sahara
| 2022-07-11T19:23:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-11T14:06:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8533333333333334
- name: F1
type: f1
value: 0.8562091503267975
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3322
- Accuracy: 0.8533
- F1: 0.8562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_pt_vp-nl_s6
|
jonatasgrosman
| 2022-07-11T19:17:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:16:53Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-nl_s6
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-nl_s833
|
jonatasgrosman
| 2022-07-11T19:13:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:12:53Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-nl_s833
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-es_s506
|
jonatasgrosman
| 2022-07-11T19:05:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:04:54Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-es_s506
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-es_s454
|
jonatasgrosman
| 2022-07-11T19:02:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-11T19:01:28Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-es_s454
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|