modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-08 06:28:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 546 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-08 06:27:40) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
pere/t5-sami-oversetter
|
pere
| 2022-11-06T14:22:18Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-10-19T07:08:44Z |
---
license: apache-2.0
---
# T5 Sami - Norwegian - Sami
Placeholder for future model. Description is coming soon.
|
fgaim/tibert-base
|
fgaim
| 2022-11-06T14:12:22Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"ti",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ti
widget:
- text: "ዓቕሚ ደቂኣንስትዮ [MASK] ብግብሪ ተራእዩ"
---
# BERT Base for Tigrinya Language
We pre-train a BERT base uncased model for Tigrinya on a dataset of 40 million tokens, training for 40 epochs.
This repo contains the original pre-trained Flax model, which was trained on a TPU v3-8, along with its corresponding PyTorch version.
## Hyperparameters
The hyperparameters for this model size are as follows:
| Model Size | L | AH | HS | FFN | P | Seq |
|------------|----|----|-----|------|------|------|
| BASE | 12 | 12 | 768 | 3072 | 110M | 512 |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
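For a quick check, here is a minimal fill-mask sketch reusing the Tigrinya example from the widget metadata above (the pipeline call is our addition, not part of the original card):
```python
from transformers import pipeline

# Fill in the [MASK] token with the model's top predictions.
fill = pipeline("fill-mask", model="fgaim/tibert-base")
for prediction in fill("ዓቕሚ ደቂኣንስትዮ [MASK] ብግብሪ ተራእዩ"):
    print(prediction["token_str"], round(prediction["score"], 3))
```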
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021 at EMNLP 2021}
}
```
|
phildav/PPO-LunarLander-v2
|
phildav
| 2022-11-06T13:51:44Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-06T13:16:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 144.76 +/- 139.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list for the actual archive name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="phildav/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
keith97/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi_news
|
keith97
| 2022-11-06T12:29:33Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-06T09:46:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi_news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 38.5318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi_news
This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3760
- Rouge1: 38.5318
- Rouge2: 12.7285
- Rougel: 21.4358
- Rougelsum: 33.4565
- Gen Len: 128.985
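A minimal summarization sketch, assuming the generation settings inherited from the base bert2bert checkpoint (the input text is a placeholder):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "keith97/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi_news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "Replace this with the multi-document news text you want to summarize."
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=128,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```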
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 4.6946 | 0.89 | 400 | 4.5393 | 37.164 | 11.5191 | 20.2519 | 32.1568 | 126.415 |
| 4.5128 | 1.78 | 800 | 4.4185 | 38.2345 | 12.2053 | 20.954 | 33.0667 | 128.975 |
| 4.2926 | 2.67 | 1200 | 4.3866 | 38.4475 | 12.6488 | 21.3046 | 33.2768 | 129.0 |
| 4.231 | 3.56 | 1600 | 4.3808 | 38.7008 | 12.6323 | 21.307 | 33.3693 | 128.955 |
| 4.125 | 4.44 | 2000 | 4.3760 | 38.5318 | 12.7285 | 21.4358 | 33.4565 | 128.985 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
halflings/diabetes_detection_v2
|
halflings
| 2022-11-06T11:21:56Z | 0 | 0 |
mlconsole
|
[
"mlconsole",
"tabular-classification",
"dataset:diabetes_detection",
"license:unknown",
"model-index",
"region:us"
] |
tabular-classification
| 2022-11-06T11:21:52Z |
---
license: unknown
inference: false
tags:
- mlconsole
- tabular-classification
library_name: mlconsole
metrics:
- accuracy
- loss
datasets:
- diabetes_detection
model-index:
- name: diabetes_detection_v2
results:
- task:
type: tabular-classification
name: tabular-classification
dataset:
type: diabetes_detection
name: diabetes_detection
metrics:
- type: accuracy
name: Accuracy
value: 0.7395833730697632
- type: loss
name: Model loss
value: 0.5416829586029053
---
# classification model trained on "diabetes_detection"
🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/diabetes_detection_v2) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
halflings/iris_classification
|
halflings
| 2022-11-06T11:04:18Z | 0 | 0 |
mlconsole
|
[
"mlconsole",
"tabular-classification",
"dataset:iris_classification",
"license:unknown",
"model-index",
"region:us"
] |
tabular-classification
| 2022-11-06T11:04:14Z |
---
license: unknown
inference: false
tags:
- mlconsole
- tabular-classification
library_name: mlconsole
metrics:
- accuracy
- loss
datasets:
- iris_classification
model-index:
- name: iris_classification
results:
- task:
type: tabular-classification
name: tabular-classification
dataset:
type: iris_classification
name: iris_classification
metrics:
- type: accuracy
name: Accuracy
value: 1
- type: loss
name: Model loss
value: 0.6147858500480652
---
# classification model trained on "iris_classification"
🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/iris_classification) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
nloc2578/QAG_Pegasus_3ep_eval
|
nloc2578
| 2022-11-06T10:39:22Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-26T17:08:04Z |
## Overview
```
Language model: Pegasus-xsum
Language: English
Downstream-task: Question-Answering Generation
Training data: SQuAD 2.0, NewsQA
Eval data: SQuAD 2.0, NewsQA
Infrastructure: Nvidia Tesla K80 12Gb RAM
```
## Hyperparameters
```
per_device_train_batch_size = 2
per_device_eval_batch_size = 2
num_train_epochs = 3
base_LM_model = "pegasus-xsum"
source_max_token_len = 256
target_max_token_len = 64
learning_rate = 5e-5
lr_schedule = LinearWarmup
warmup_steps = 150
```
## Usage
```python
import transformers
from transformers import PegasusForConditionalGeneration, PegasusTokenizerFast
model_name = 'nloc2578/QAG_Pegasus_3ep_eval'
tokenizer = PegasusTokenizerFast.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id)
text = '''The primary goal of distractor generation is generating answer
options that are plausible answers to the question, and might appear
correct to a user who does not know the correct answer. Distractors
should also be clearly distinct from the key and each other, and
they should not be correct answers to the question (for questions
that might have multiple correct answers).'''
input_id = tokenizer(text, return_tensors='pt')
output = model.generate(input_id['input_ids'])
result = tokenizer.decode(output[0])
print(result)
```
|
vanme/vmehlin_distilbert-finetuned-squad
|
vanme
| 2022-11-06T10:37:11Z | 19 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-24T13:12:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: vmehlin_distilbert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vmehlin_distilbert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
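A minimal extractive question-answering sketch with the `pipeline` API (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vanme/vmehlin_distilbert-finetuned-squad")

context = "The Eiffel Tower was completed in 1889 and stands in Paris, France."
result = qa(question="When was the Eiffel Tower completed?", context=context)
print(result["answer"], result["score"])
```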
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
### co2_eq_emissions:
- emissions: 49.49 g
- source: eco2AI
- training_time: 00:31:54
- geographical_location: Bavaria, Germany
- hardware_used: Intel(R) Xeon(R) Gold 5215 CPUs (2 devices) & NVIDIA A40 (1 device)
|
semperrr/korset
|
semperrr
| 2022-11-06T09:48:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-06T09:47:39Z |
https://zencastr.com/VeR-HD-Sin-novedad-en-el-frente-2022-Online-Espanol-Latino-REPELIS
https://zencastr.com/REPELIS-VeR-El-cuarto-pasajero-2022-Online-Pelicula-ompleta-y-HD
https://zencastr.com/REPELIS-VeR-Los-renglones-torcidos-de-Dios-2022-Online-Pelicula-ompleta-y-HD
https://zencastr.com/REPELIS-VeR-Amsterdam-2022-Online-Pelicula-ompleta-y-HD
https://zencastr.com/REPELIS-VeR-One-Piece-Film-Red-2022-Online-Pelicula-ompleta-y-HD
|
SADX/mishaljohn
|
SADX
| 2022-11-06T09:39:57Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-06T09:18:02Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: mishaljohn
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8333333134651184
---
# mishaljohn
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### mishaljohn

#### not mishaljohn

|
okho0653/distilbert-base-uncased-finetuned-20pc
|
okho0653
| 2022-11-06T06:16:40Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-06T06:04:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-20pc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-20pc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3326
- Accuracy: 0.8642
- F1: 0.4762
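A minimal inference sketch; note that the returned label names (e.g. LABEL_0/LABEL_1) follow the Trainer defaults, since the card does not document a class mapping:
```python
from transformers import pipeline

# Score a single input text with the fine-tuned classifier.
clf = pipeline("text-classification", model="okho0653/distilbert-base-uncased-finetuned-20pc")
print(clf("Replace this with the text you want to classify."))
```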
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 41 | 0.4428 | 0.8333 | 0.0 |
| No log | 2.0 | 82 | 0.4012 | 0.8333 | 0.0 |
| No log | 3.0 | 123 | 0.3619 | 0.8333 | 0.1818 |
| No log | 4.0 | 164 | 0.3488 | 0.8580 | 0.3784 |
| No log | 5.0 | 205 | 0.3326 | 0.8642 | 0.4762 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
adit94/sentenceTest_kbert4
|
adit94
| 2022-11-06T06:10:55Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-06T06:09:54Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# adit94/sentenceTest_kbert4
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('adit94/sentenceTest_kbert4')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('adit94/sentenceTest_kbert4')
model = AutoModel.from_pretrained('adit94/sentenceTest_kbert4')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=adit94/sentenceTest_kbert4)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5791 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/alexabliss_wwe
|
huggingtweets
| 2022-11-06T05:06:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-06T04:18:55Z |
---
language: en
thumbnail: http://www.huggingtweets.com/alexabliss_wwe/1667711162135/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1271821102134833153/krgeswcX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lexi (Kaufman) Cabrera</div>
<div style="text-align: center; font-size: 14px;">@alexabliss_wwe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lexi (Kaufman) Cabrera.
| Data | Lexi (Kaufman) Cabrera |
| --- | --- |
| Tweets downloaded | 3184 |
| Retweets | 1160 |
| Short tweets | 399 |
| Tweets kept | 1625 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hgwztvb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alexabliss_wwe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vlezdiv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vlezdiv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/alexabliss_wwe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TTian/bert-mlm-feedback-512
|
TTian
| 2022-11-06T03:25:15Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-06T03:10:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-mlm-feedback-512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-mlm-feedback-512
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6086 | 1.0 | 380 | 2.0284 |
| 2.4595 | 2.0 | 760 | 2.1917 |
| 2.41 | 3.0 | 1140 | 2.7014 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
uripper/GIANNIS
|
uripper
| 2022-11-06T02:34:15Z | 5 | 0 |
diffusers
|
[
"diffusers",
"unconditional-image-generation",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-01T10:20:02Z |
---
tags:
- unconditional-image-generation
---
|
sd-concepts-library/smurf-style
|
sd-concepts-library
| 2022-11-06T01:34:45Z | 0 | 4 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-06T01:34:41Z |
---
license: mit
---
### Smurf Style on Stable Diffusion
This is the `<smurfy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
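Outside the notebooks, a minimal `diffusers` sketch also works; the base checkpoint choice and the availability of the `load_textual_inversion` loader are assumptions about your setup:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base checkpoint; any SD 1.x pipeline compatible with the embedding should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/smurf-style")

image = pipe("a cozy mushroom village in <smurfy> style").images[0]
image.save("smurfy_village.png")
```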
Here is the new concept you will be able to use as a `style`:










|
ryo-hsgw/xlm-roberta-base-finetuned-panx-it
|
ryo-hsgw
| 2022-11-05T23:43:08Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-05T23:39:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8224755700325732
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2521
- F1: 0.8225
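A minimal token-classification sketch (the Italian example sentence is illustrative; `aggregation_strategy="simple"` merges word pieces into entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ryo-hsgw/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Leonardo da Vinci è nato in Toscana, vicino a Firenze."))
```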
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8088 | 1.0 | 70 | 0.3423 | 0.7009 |
| 0.2844 | 2.0 | 140 | 0.2551 | 0.8027 |
| 0.1905 | 3.0 | 210 | 0.2521 | 0.8225 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ryo-hsgw/xlm-roberta-base-finetuned-panx-fr
|
ryo-hsgw
| 2022-11-05T23:39:34Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-05T23:34:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8325761399966348
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2978
- F1: 0.8326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.574 | 1.0 | 191 | 0.3495 | 0.7889 |
| 0.2649 | 2.0 | 382 | 0.2994 | 0.8242 |
| 0.1716 | 3.0 | 573 | 0.2978 | 0.8326 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ryo-hsgw/xlm-roberta-base-finetuned-panx-de-fr
|
ryo-hsgw
| 2022-11-05T23:33:42Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-05T23:23:52Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1472 | 2.0 | 1430 | 0.1633 | 0.8488 |
| 0.0948 | 3.0 | 2145 | 0.1643 | 0.8626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ffigueiredo/dataset
|
ffigueiredo
| 2022-11-05T22:35:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-05T19:46:43Z |
# Dataset - Modelos Preditivos Conexionistas 2022.01
### Fábio Figueiredo
|Image Detection|75|4|
|--|--|--|
### Based on the products sold by VOLTZ MOTORS DO BRASIL S.A, we will detect its 4 main products.
### We will thus perform image detection across the EV1 SPORT scooter model, the EVS street model, and the corporate models Miles and EVS Work.
|
huggingtweets/aeronautblue
|
huggingtweets
| 2022-11-05T21:43:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-05T21:39:42Z |
---
language: en
thumbnail: http://www.huggingtweets.com/aeronautblue/1667684473479/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1515688111526891521/o_3LoG40_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">blue</div>
<div style="text-align: center; font-size: 14px;">@aeronautblue</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from blue.
| Data | blue |
| --- | --- |
| Tweets downloaded | 2373 |
| Retweets | 460 |
| Short tweets | 379 |
| Tweets kept | 1534 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e1wsp7qa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aeronautblue's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/61928z1e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/61928z1e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/aeronautblue')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tatakof/ppo-LunarLander-v2
|
tatakof
| 2022-11-05T21:38:58Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-05T17:16:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.23 +/- 24.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="tatakof/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CTAE4OK/Niki
|
CTAE4OK
| 2022-11-05T21:14:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-05T21:09:35Z |
```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("DGSpitzer/Cyberpunk-Anime-Diffusion")
```
|
aleqsay/af
|
aleqsay
| 2022-11-05T20:51:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-05T20:51:43Z |
---
license: creativeml-openrail-m
---
|
halflings/diabetes_detection_fixed3
|
halflings
| 2022-11-05T20:43:11Z | 0 | 0 |
mlconsole
|
[
"mlconsole",
"tabular-classification",
"dataset:diabetes_detection",
"license:unknown",
"model-index",
"region:us"
] |
tabular-classification
| 2022-11-05T20:43:08Z |
---
license: unknown
inference: false
tags:
- mlconsole
- tabular-classification
library_name: mlconsole
metrics:
- accuracy
- loss
datasets:
- diabetes_detection
model-index:
- name: diabetes_detection_fixed3
results:
- task:
type: tabular-classification
name: tabular-classification
dataset:
type: diabetes_detection
name: diabetes_detection
metrics:
- type: accuracy
name: Accuracy
value: 0.78125
- type: loss
name: Model loss
value: 0.523585319519043
---
# classification model trained on "diabetes_detection"
🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/diabetes_detection_fixed3) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
radeveljic99/ppo-LunarLander-v2
|
radeveljic99
| 2022-11-05T20:24:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-05T20:02:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 174.96 +/- 12.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="radeveljic99/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Ballesteyoni/Woman
|
Ballesteyoni
| 2022-11-05T18:11:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-05T18:09:52Z |
Women dancing in a circle in menstrual blood in moon shadow with shamans
|
barbarabax/unicorns
|
barbarabax
| 2022-11-05T18:02:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-05T15:44:14Z |
Use unicornstyle in prompt

---
language:
- en
tags:
- ckpt
- unicorn
license: "openrail"
---
|
ocm/distilbert-base-uncased-finetuned-emotion
|
ocm
| 2022-11-05T17:45:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-29T11:15:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.935
- name: F1
type: f1
value: 0.9351083637430424
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.935
- F1: 0.9351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7703 | 1.0 | 250 | 0.2588 | 0.918 | 0.9165 |
| 0.2031 | 2.0 | 500 | 0.1773 | 0.928 | 0.9282 |
| 0.1385 | 3.0 | 750 | 0.1593 | 0.934 | 0.9342 |
| 0.1101 | 4.0 | 1000 | 0.1582 | 0.935 | 0.9351 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
GGAdvent/distilbert-base-uncased-finetuned-cola
|
GGAdvent
| 2022-11-05T16:55:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-05T16:45:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5416385858307549
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8359
- Matthews Correlation: 0.5416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5239 | 1.0 | 535 | 0.5284 | 0.4297 |
| 0.3437 | 2.0 | 1070 | 0.5006 | 0.5166 |
| 0.2301 | 3.0 | 1605 | 0.5707 | 0.5321 |
| 0.1814 | 4.0 | 2140 | 0.7802 | 0.5245 |
| 0.1271 | 5.0 | 2675 | 0.8359 | 0.5416 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
erose/wav2vec2-malayalam_english-3h
|
erose
| 2022-11-05T16:11:28Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"malayalam",
"ml_en",
"code-switching",
"ml",
"en",
"dataset:erose/code_switching-ml-en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-03T13:25:37Z |
---
license: apache-2.0
description: wav2vec2 based model for malayalam-english code-switched speech
language:
- ml
- en
tags:
- automatic-speech-recognition
- malayalam
- ml_en
- code-switching
datasets:
- erose/code_switching-ml-en
model-index:
- name: wav2vec2 ml_en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: erose/code_switching-ml-en (test set)
type: code_switching-ml-en
args: ml_en
metrics:
- name: Test WER
type: wer
value: 58.93
- name: Test CER
type: cer
value: 19.45
---
|
pepa/deberta-v3-base-fever
|
pepa
| 2022-11-05T15:03:56Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:copenlu/fever_gold_evidence",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-29T07:36:51Z |
---
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base-fever
results: []
datasets:
- copenlu/fever_gold_evidence
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-fever
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5146
- eval_p: 0.8912
- eval_r: 0.8904
- eval_f1: 0.8897
- eval_runtime: 49.9875
- eval_samples_per_second: 376.194
- eval_steps_per_second: 47.032
- step: 0
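A minimal sketch for scoring a claim against retrieved evidence; the (claim, evidence) input order and the label-to-class mapping are not documented in this card, so treat both as assumptions:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "pepa/deberta-v3-base-fever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

claim = "The Eiffel Tower is located in Berlin."
evidence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # map class indices to labels via model.config.id2label
```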
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
pepa/deberta-v3-large-fever
|
pepa
| 2022-11-05T15:03:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:copenlu/fever_gold_evidence",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T20:22:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-large-fever
results: []
datasets:
- copenlu/fever_gold_evidence
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-fever
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5286
- eval_p: 0.8827
- eval_r: 0.8826
- eval_f1: 0.8816
- eval_runtime: 231.4062
- eval_samples_per_second: 81.264
- eval_steps_per_second: 10.16
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
pepa/deberta-v3-small-fever
|
pepa
| 2022-11-05T15:03:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:copenlu/fever_gold_evidence",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-29T07:39:36Z |
---
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-small-fever
results: []
datasets:
- copenlu/fever_gold_evidence
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-fever
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4816
- eval_p: 0.8811
- eval_r: 0.8783
- eval_f1: 0.8780
- eval_runtime: 28.4486
- eval_samples_per_second: 661.017
- eval_steps_per_second: 82.64
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
kohama1988/distilbert-base-uncased-finetuned-emotion
|
kohama1988
| 2022-11-05T15:01:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-05T14:33:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236445718445864
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.9235
- F1: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3092 | 0.909 | 0.9070 |
| No log | 2.0 | 500 | 0.2183 | 0.9235 | 0.9236 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pere/whisper-small-npsc
|
pere
| 2022-11-05T14:41:27Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"nn",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-04T21:47:16Z |
---
language:
- nn
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-small-npsc
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: 16K_mp3_bokmaal
split: train
args: 16K_mp3_bokmaal
metrics:
- name: Wer
type: wer
value: 12.925418803583286
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-npsc
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2028
- Wer: 12.9254
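A minimal transcription sketch with the ASR pipeline (the audio path is a placeholder; long recordings may need chunking):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="pere/whisper-small-npsc")
print(asr("sample.flac")["text"])  # replace with your own audio file
```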
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3922 | 0.18 | 500 | 0.3975 | 24.2055 |
| 0.2893 | 0.36 | 1000 | 0.3139 | 20.1507 |
| 0.2471 | 0.54 | 1500 | 0.2733 | 17.4449 |
| 0.2159 | 0.72 | 2000 | 0.2488 | 16.2681 |
| 0.2195 | 0.89 | 2500 | 0.2304 | 15.0577 |
| 0.1178 | 1.07 | 3000 | 0.2245 | 14.5968 |
| 0.1099 | 1.25 | 3500 | 0.2183 | 14.1118 |
| 0.1059 | 1.43 | 4000 | 0.2136 | 13.7914 |
| 0.1156 | 1.61 | 4500 | 0.2072 | 13.7491 |
| 0.1025 | 1.79 | 5000 | 0.2034 | 13.1515 |
| 0.1123 | 1.97 | 5500 | 0.2006 | 13.0284 |
| 0.0734 | 2.15 | 6000 | 0.2028 | 12.9254 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AlanRobotics/bert_q_a_test
|
AlanRobotics
| 2022-11-05T13:51:56Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-05T12:18:36Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: bert_q_a_test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert_q_a_test
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
wilsonsob/projetoFinal
|
wilsonsob
| 2022-11-05T13:16:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-03T17:43:05Z |
# Final Project - Modelos Preditivos Conexionistas
### Student name
|**Project Type**|**Selected Model**|**Language**|
|--|--|--|
|Object Detection|YOLOv5|PyTorch|
## Performance
The trained model achieves a performance of **69%**.
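A minimal inference sketch via `torch.hub`; the weights path below is an assumption (YOLOv5 training runs typically write them under `runs/train/exp/weights/best.pt`):
```python
import torch

# Load the custom-trained YOLOv5 weights (path is assumed, adjust as needed).
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")

results = model("example.jpg")  # placeholder image path
results.print()                 # prints detected classes, boxes and confidences
```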
### Training block output
<details>
<summary>Click to expand!</summary>
```text
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
0/2999 14.1G 0.1176 0.03496 0.04929 227 640: 100% 5/5 [00:08<00:00, 1.65s/it]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:04<00:00, 4.23s/it]
all 79 172 0.00117 0.29 0.00144 0.000293
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
1/2999 13.3G 0.11 0.03478 0.04837 216 640: 100% 5/5 [00:03<00:00, 1.34it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.98s/it]
all 79 172 0.00148 0.36 0.00143 0.000484
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
2/2999 13.3G 0.09838 0.03372 0.04588 189 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.42s/it]
all 79 172 0.00276 0.37 0.00585 0.00135
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
3/2999 13.3G 0.08941 0.03499 0.04303 171 640: 100% 5/5 [00:03<00:00, 1.39it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.42s/it]
all 79 172 0.00324 0.61 0.00878 0.00303
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
4/2999 13.3G 0.08229 0.03798 0.03902 230 640: 100% 5/5 [00:03<00:00, 1.39it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.24s/it]
all 79 172 0.00479 0.803 0.0192 0.0057
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
5/2999 13.3G 0.07235 0.03762 0.03592 187 640: 100% 5/5 [00:03<00:00, 1.39it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.32s/it]
all 79 172 0.772 0.0641 0.0685 0.0199
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
6/2999 13.3G 0.06836 0.03883 0.03304 227 640: 100% 5/5 [00:03<00:00, 1.38it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.332 0.221 0.0677 0.0184
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
7/2999 13.3G 0.06247 0.03535 0.0311 201 640: 100% 5/5 [00:03<00:00, 1.38it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.57s/it]
all 79 172 0.326 0.266 0.082 0.0217
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
8/2999 13.3G 0.05948 0.0349 0.02835 161 640: 100% 5/5 [00:03<00:00, 1.36it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.30s/it]
all 79 172 0.393 0.295 0.175 0.0498
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
9/2999 13.3G 0.05892 0.03628 0.02495 221 640: 100% 5/5 [00:03<00:00, 1.38it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.40s/it]
all 79 172 0.386 0.303 0.138 0.0436
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
10/2999 13.3G 0.05797 0.03046 0.02325 158 640: 100% 5/5 [00:03<00:00, 1.39it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.24s/it]
all 79 172 0.449 0.376 0.226 0.0926
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
11/2999 13.3G 0.05604 0.03243 0.02248 226 640: 100% 5/5 [00:03<00:00, 1.37it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.23s/it]
all 79 172 0.519 0.326 0.3 0.129
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
12/2999 13.3G 0.05705 0.03044 0.02158 181 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.508 0.342 0.361 0.191
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
13/2999 13.3G 0.05534 0.02701 0.01887 167 640: 100% 5/5 [00:03<00:00, 1.37it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.29s/it]
all 79 172 0.429 0.367 0.242 0.0978
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
14/2999 13.3G 0.05445 0.03095 0.01875 188 640: 100% 5/5 [00:03<00:00, 1.35it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.15s/it]
all 79 172 0.517 0.495 0.393 0.178
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
15/2999 13.3G 0.05658 0.02785 0.01648 175 640: 100% 5/5 [00:03<00:00, 1.33it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.512 0.479 0.358 0.177
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
16/2999 13.3G 0.05553 0.02625 0.01534 186 640: 100% 5/5 [00:03<00:00, 1.37it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.533 0.464 0.412 0.178
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
17/2999 13.3G 0.0524 0.02705 0.01531 187 640: 100% 5/5 [00:04<00:00, 1.18it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:02<00:00, 2.25s/it]
all 79 172 0.304 0.483 0.299 0.12
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
18/2999 13.3G 0.05295 0.02631 0.01442 162 640: 100% 5/5 [00:04<00:00, 1.01it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.99s/it]
all 79 172 0.649 0.416 0.435 0.203
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
19/2999 13.3G 0.05205 0.027 0.01497 227 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.305 0.518 0.336 0.151
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
20/2999 13.3G 0.05057 0.02601 0.01201 190 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.456 0.594 0.442 0.192
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
21/2999 13.3G 0.0488 0.02679 0.01386 138 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.418 0.586 0.428 0.221
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
22/2999 13.3G 0.04713 0.02576 0.01446 215 640: 100% 5/5 [00:03<00:00, 1.33it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.642 0.477 0.467 0.23
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
23/2999 13.3G 0.04759 0.02555 0.0115 179 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.611 0.474 0.436 0.21
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
24/2999 13.3G 0.0453 0.02547 0.01341 218 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.661 0.485 0.517 0.273
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
25/2999 13.3G 0.04469 0.02657 0.01159 229 640: 100% 5/5 [00:03<00:00, 1.34it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.62 0.42 0.481 0.236
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
26/2999 13.3G 0.04451 0.02416 0.0126 202 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.15s/it]
all 79 172 0.719 0.431 0.502 0.28
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
27/2999 13.3G 0.04454 0.02421 0.0113 165 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.484 0.424 0.438 0.217
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
28/2999 13.3G 0.04353 0.02453 0.01121 222 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.328 0.335 0.307 0.165
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
29/2999 13.3G 0.04318 0.024 0.01177 168 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.399 0.317 0.292 0.141
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
30/2999 13.3G 0.04106 0.0244 0.01042 202 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.654 0.512 0.52 0.29
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
31/2999 13.3G 0.04151 0.02421 0.01037 193 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.73 0.389 0.46 0.254
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
32/2999 13.3G 0.04187 0.02569 0.009244 193 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.372 0.432 0.397 0.184
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
33/2999 13.3G 0.04139 0.02411 0.007808 191 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.657 0.571 0.583 0.354
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
34/2999 13.3G 0.03919 0.02373 0.008649 186 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.719 0.515 0.556 0.273
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
35/2999 13.3G 0.03933 0.02373 0.01062 194 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.646 0.496 0.499 0.297
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
36/2999 13.3G 0.03985 0.02292 0.01068 171 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.597 0.514 0.424 0.212
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
37/2999 13.3G 0.04022 0.02436 0.01181 206 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.468 0.473 0.381 0.199
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
38/2999 13.3G 0.0392 0.02418 0.01042 207 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.589 0.442 0.495 0.25
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
39/2999 13.3G 0.03949 0.0232 0.008525 175 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.578 0.413 0.467 0.233
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
40/2999 13.3G 0.03951 0.02309 0.00936 189 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.473 0.597 0.552 0.319
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
41/2999 13.3G 0.03824 0.02332 0.01016 183 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.46 0.647 0.494 0.284
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
42/2999 13.3G 0.03829 0.02417 0.009787 197 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.289 0.588 0.436 0.211
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
43/2999 13.3G 0.03897 0.02372 0.009366 182 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.272 0.612 0.385 0.217
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
44/2999 13.3G 0.0391 0.02348 0.008347 223 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.621 0.392 0.457 0.238
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
45/2999 13.3G 0.03792 0.02103 0.01101 159 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.543 0.488 0.527 0.293
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
46/2999 13.3G 0.03747 0.02327 0.009737 211 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.423 0.621 0.509 0.278
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
47/2999 13.3G 0.03701 0.02207 0.008706 189 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.459 0.505 0.448 0.231
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
48/2999 13.3G 0.03722 0.02309 0.008686 179 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.488 0.637 0.532 0.289
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
49/2999 13.3G 0.03637 0.02043 0.007798 179 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.732 0.443 0.491 0.267
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
50/2999 13.3G 0.03709 0.02212 0.007632 194 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.29s/it]
all 79 172 0.468 0.676 0.564 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
51/2999 13.3G 0.03752 0.02221 0.009035 168 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.417 0.667 0.451 0.248
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
52/2999 13.3G 0.03637 0.02205 0.007745 216 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.602 0.533 0.563 0.305
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
53/2999 13.3G 0.03561 0.02235 0.006919 213 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.10s/it]
all 79 172 0.71 0.514 0.575 0.317
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
54/2999 13.3G 0.0375 0.02151 0.007491 189 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.724 0.39 0.472 0.246
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
55/2999 13.3G 0.03676 0.02192 0.007115 211 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.617 0.509 0.502 0.308
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
56/2999 13.3G 0.03543 0.02149 0.008343 174 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.599 0.519 0.537 0.302
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
57/2999 13.3G 0.03516 0.02129 0.00804 185 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.09s/it]
all 79 172 0.48 0.465 0.442 0.253
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
58/2999 13.3G 0.03451 0.02335 0.009221 200 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.09s/it]
all 79 172 0.503 0.407 0.386 0.196
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
59/2999 13.3G 0.0356 0.02126 0.006811 248 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.697 0.329 0.407 0.219
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
60/2999 13.3G 0.03437 0.02229 0.007112 226 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.422 0.53 0.456 0.252
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
61/2999 13.3G 0.03398 0.02009 0.007508 209 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.716 0.369 0.501 0.286
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
62/2999 13.3G 0.03399 0.02136 0.007171 189 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.492 0.623 0.51 0.289
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
63/2999 13.3G 0.0354 0.02072 0.008472 176 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.49s/it]
all 79 172 0.623 0.603 0.616 0.373
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
64/2999 13.3G 0.03459 0.02183 0.008503 187 640: 100% 5/5 [00:04<00:00, 1.21it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.6 0.618 0.642 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
65/2999 13.3G 0.03388 0.02139 0.008551 205 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.614 0.314 0.34 0.18
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
66/2999 13.3G 0.03483 0.02107 0.009369 173 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.494 0.505 0.489 0.257
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
67/2999 13.3G 0.0334 0.0195 0.006718 162 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.608 0.412 0.454 0.246
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
68/2999 13.3G 0.03517 0.02186 0.008161 200 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.691 0.441 0.521 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
69/2999 13.3G 0.03397 0.0213 0.007542 192 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.598 0.403 0.453 0.233
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
70/2999 13.3G 0.03464 0.02079 0.00808 220 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.11s/it]
all 79 172 0.657 0.415 0.505 0.287
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
71/2999 13.3G 0.03414 0.02142 0.006937 149 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.529 0.476 0.479 0.28
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
72/2999 13.3G 0.03195 0.02103 0.007308 189 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.611 0.424 0.426 0.258
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
73/2999 13.3G 0.03293 0.0218 0.00651 222 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.15it/s]
all 79 172 0.728 0.479 0.542 0.337
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
74/2999 13.3G 0.03236 0.01866 0.009649 127 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.588 0.594 0.595 0.36
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
75/2999 13.3G 0.03235 0.01942 0.007454 176 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.713 0.562 0.592 0.334
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
76/2999 13.3G 0.03392 0.02069 0.006954 187 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.753 0.474 0.537 0.31
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
77/2999 13.3G 0.03292 0.02024 0.00708 179 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.724 0.502 0.523 0.285
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
78/2999 13.3G 0.03178 0.021 0.006592 208 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.724 0.503 0.527 0.304
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
79/2999 13.3G 0.03131 0.01963 0.0057 187 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.703 0.471 0.539 0.329
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
80/2999 13.3G 0.03203 0.02018 0.008287 198 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.77 0.499 0.564 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
81/2999 13.3G 0.03084 0.01961 0.007307 206 640: 100% 5/5 [00:04<00:00, 1.16it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.81s/it]
all 79 172 0.687 0.463 0.535 0.318
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
82/2999 13.3G 0.03089 0.02012 0.006733 202 640: 100% 5/5 [00:04<00:00, 1.20it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.27s/it]
all 79 172 0.597 0.511 0.501 0.287
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
83/2999 13.3G 0.03064 0.01998 0.005996 211 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.18s/it]
all 79 172 0.601 0.418 0.48 0.25
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
84/2999 13.3G 0.03132 0.01948 0.004924 206 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.651 0.478 0.534 0.317
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
85/2999 13.3G 0.03003 0.01933 0.006001 216 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.718 0.447 0.572 0.33
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
86/2999 13.3G 0.0322 0.01857 0.006746 204 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.712 0.534 0.57 0.315
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
87/2999 13.3G 0.02937 0.0195 0.007804 208 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.667 0.539 0.59 0.377
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
88/2999 13.3G 0.03086 0.02039 0.007138 200 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.628 0.543 0.558 0.323
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
89/2999 13.3G 0.03102 0.01957 0.006189 216 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.558 0.57 0.476 0.272
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
90/2999 13.3G 0.03026 0.02042 0.008099 211 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.668 0.57 0.512 0.306
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
91/2999 13.3G 0.02908 0.01987 0.007552 200 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.616 0.483 0.481 0.278
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
92/2999 13.3G 0.03033 0.01963 0.007505 171 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.808 0.519 0.616 0.369
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
93/2999 13.3G 0.03 0.01985 0.007565 192 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.741 0.466 0.533 0.313
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
94/2999 13.3G 0.03061 0.01982 0.006072 164 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.839 0.448 0.552 0.332
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
95/2999 13.3G 0.02993 0.01983 0.00618 197 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.11s/it]
all 79 172 0.783 0.435 0.549 0.35
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
96/2999 13.3G 0.0297 0.01942 0.004898 193 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.808 0.534 0.602 0.383
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
97/2999 13.3G 0.03024 0.0192 0.007548 199 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.677 0.545 0.644 0.388
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
98/2999 13.3G 0.02892 0.01992 0.006328 202 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.659 0.531 0.599 0.365
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
99/2999 13.3G 0.02903 0.01783 0.008322 179 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.598 0.545 0.559 0.325
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
100/2999 13.3G 0.03175 0.01939 0.005829 211 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.602 0.477 0.479 0.279
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
101/2999 13.3G 0.02981 0.01811 0.006895 187 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.532 0.483 0.449 0.254
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
102/2999 13.3G 0.02894 0.01893 0.007293 178 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.76 0.362 0.532 0.311
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
103/2999 13.3G 0.02853 0.01932 0.005571 233 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.603 0.514 0.581 0.337
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
104/2999 13.3G 0.02875 0.01752 0.006674 162 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.76 0.454 0.57 0.332
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
105/2999 13.3G 0.02874 0.01946 0.006926 211 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.694 0.45 0.506 0.289
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
106/2999 13.3G 0.02967 0.01745 0.005547 205 640: 100% 5/5 [00:04<00:00, 1.22it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.75s/it]
all 79 172 0.748 0.507 0.519 0.296
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
107/2999 13.3G 0.03031 0.01972 0.006291 210 640: 100% 5/5 [00:04<00:00, 1.25it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.18s/it]
all 79 172 0.745 0.489 0.565 0.335
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
108/2999 13.3G 0.02897 0.01927 0.006829 186 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.743 0.543 0.545 0.312
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
109/2999 13.3G 0.03018 0.01939 0.006308 237 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.715 0.591 0.575 0.308
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
110/2999 13.3G 0.02912 0.01956 0.006358 192 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.717 0.545 0.581 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
111/2999 13.3G 0.02963 0.01883 0.007443 157 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.732 0.498 0.617 0.348
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
112/2999 13.3G 0.02796 0.01824 0.006296 226 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.67 0.632 0.623 0.384
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
113/2999 13.3G 0.02855 0.01817 0.005978 190 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.675 0.574 0.594 0.321
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
114/2999 13.3G 0.02922 0.01838 0.006151 185 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.782 0.457 0.584 0.336
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
115/2999 13.3G 0.02933 0.0188 0.008184 161 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.10s/it]
all 79 172 0.588 0.567 0.559 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
116/2999 13.3G 0.02704 0.0186 0.005759 217 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.712 0.594 0.61 0.387
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
117/2999 13.3G 0.02805 0.01756 0.007583 183 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.72 0.483 0.574 0.36
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
118/2999 13.3G 0.02756 0.0179 0.006019 190 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.726 0.576 0.603 0.351
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
119/2999 13.3G 0.02793 0.01717 0.007643 168 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.735 0.538 0.595 0.338
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
120/2999 13.3G 0.0286 0.01874 0.005134 223 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.11s/it]
all 79 172 0.653 0.528 0.551 0.323
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
121/2999 13.3G 0.0283 0.01745 0.005626 189 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.763 0.444 0.544 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
122/2999 13.3G 0.02849 0.01963 0.00636 189 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.728 0.518 0.622 0.356
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
123/2999 13.3G 0.02766 0.01739 0.005559 157 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.705 0.387 0.452 0.269
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
124/2999 13.3G 0.02744 0.01842 0.007753 207 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.756 0.553 0.605 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
125/2999 13.3G 0.02833 0.01658 0.005275 144 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.771 0.441 0.507 0.327
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
126/2999 13.3G 0.02873 0.01809 0.006018 230 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.777 0.509 0.608 0.33
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
127/2999 13.3G 0.02782 0.01771 0.005374 184 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.736 0.517 0.548 0.345
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
128/2999 13.3G 0.02666 0.01821 0.004101 210 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.638 0.596 0.615 0.336
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
129/2999 13.3G 0.02662 0.01685 0.005201 182 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.722 0.597 0.629 0.366
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
130/2999 13.3G 0.02622 0.01672 0.006191 144 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.802 0.468 0.542 0.325
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
131/2999 13.3G 0.02667 0.01867 0.00618 197 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.609 0.454 0.49 0.301
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
132/2999 13.3G 0.02787 0.01969 0.005775 229 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.525 0.437 0.459 0.266
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
133/2999 13.3G 0.02774 0.01836 0.006047 212 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.13it/s]
all 79 172 0.632 0.52 0.575 0.317
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
134/2999 13.3G 0.02741 0.01768 0.00579 219 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.625 0.632 0.585 0.37
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
135/2999 13.3G 0.02713 0.01778 0.005949 217 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.514 0.585 0.452 0.257
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
136/2999 13.3G 0.0277 0.01698 0.007301 162 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.578 0.407 0.444 0.255
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
137/2999 13.3G 0.0272 0.01767 0.004752 179 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.479 0.457 0.483 0.284
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
138/2999 13.3G 0.02749 0.018 0.004356 216 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.729 0.473 0.532 0.288
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
139/2999 13.3G 0.02768 0.01737 0.006317 188 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.657 0.548 0.533 0.298
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
140/2999 13.3G 0.02608 0.01767 0.00451 184 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.10s/it]
all 79 172 0.737 0.553 0.586 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
141/2999 13.3G 0.02657 0.01743 0.004523 201 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.51s/it]
all 79 172 0.781 0.515 0.606 0.363
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
142/2999 13.3G 0.02724 0.01774 0.006709 184 640: 100% 5/5 [00:04<00:00, 1.16it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.88s/it]
all 79 172 0.742 0.576 0.637 0.346
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
143/2999 13.3G 0.02575 0.01799 0.005323 212 640: 100% 5/5 [00:04<00:00, 1.25it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.30s/it]
all 79 172 0.776 0.492 0.563 0.35
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
144/2999 13.3G 0.02654 0.01726 0.005264 166 640: 100% 5/5 [00:04<00:00, 1.19it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.60s/it]
all 79 172 0.679 0.535 0.565 0.314
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
145/2999 13.3G 0.02687 0.01829 0.005005 250 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.81 0.523 0.563 0.342
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
146/2999 13.3G 0.02672 0.01687 0.005595 208 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.76 0.503 0.556 0.328
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
147/2999 13.3G 0.02691 0.01723 0.005911 180 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.748 0.537 0.579 0.338
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
148/2999 13.3G 0.02626 0.01806 0.004589 227 640: 100% 5/5 [00:04<00:00, 1.20it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.39s/it]
all 79 172 0.795 0.512 0.556 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
149/2999 13.3G 0.02653 0.01662 0.005405 177 640: 100% 5/5 [00:04<00:00, 1.24it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.791 0.464 0.531 0.328
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
150/2999 13.3G 0.0274 0.01625 0.006181 147 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.749 0.544 0.602 0.352
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
151/2999 13.3G 0.02522 0.01715 0.004715 184 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.801 0.549 0.596 0.363
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
152/2999 13.3G 0.02576 0.01662 0.004771 139 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.825 0.503 0.559 0.349
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
153/2999 13.3G 0.02895 0.01797 0.005624 185 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.837 0.465 0.54 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
154/2999 13.3G 0.02476 0.01641 0.005789 184 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.79 0.498 0.564 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
155/2999 13.3G 0.02548 0.01872 0.005374 212 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.824 0.498 0.573 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
156/2999 13.3G 0.02632 0.01815 0.006032 229 640: 100% 5/5 [00:04<00:00, 1.21it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.715 0.549 0.597 0.343
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
157/2999 13.3G 0.02511 0.01649 0.005817 195 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.17it/s]
all 79 172 0.821 0.566 0.663 0.385
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
158/2999 13.3G 0.02516 0.01653 0.005879 159 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.686 0.641 0.643 0.372
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
159/2999 13.3G 0.02657 0.01595 0.005654 185 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.00s/it]
all 79 172 0.676 0.625 0.652 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
160/2999 13.3G 0.02582 0.0173 0.005202 215 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.729 0.547 0.621 0.351
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
161/2999 13.3G 0.02607 0.01732 0.006912 218 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.679 0.483 0.547 0.322
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
162/2999 13.3G 0.02534 0.01606 0.005221 169 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.799 0.44 0.601 0.379
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
163/2999 13.3G 0.02638 0.01726 0.006002 216 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.00s/it]
all 79 172 0.689 0.635 0.631 0.381
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
164/2999 13.3G 0.02443 0.01882 0.005191 279 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.721 0.67 0.685 0.404
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
165/2999 13.3G 0.02414 0.01719 0.003583 182 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.67 0.646 0.681 0.409
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
166/2999 13.3G 0.02653 0.01778 0.005552 195 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.796 0.659 0.663 0.362
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
167/2999 13.3G 0.02577 0.01602 0.005825 178 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.14s/it]
all 79 172 0.721 0.69 0.704 0.404
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
168/2999 13.3G 0.02503 0.01867 0.004954 244 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.692 0.658 0.701 0.417
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
169/2999 13.3G 0.02524 0.01849 0.006853 222 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.716 0.598 0.644 0.371
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
170/2999 13.3G 0.02458 0.017 0.004295 193 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.589 0.633 0.537 0.318
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
171/2999 13.3G 0.02478 0.01661 0.003602 186 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.685 0.599 0.582 0.355
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
172/2999 13.3G 0.02531 0.01569 0.005721 175 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.687 0.607 0.62 0.367
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
173/2999 13.3G 0.02561 0.0182 0.005804 214 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.769 0.533 0.642 0.393
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
174/2999 13.3G 0.02489 0.01687 0.006483 180 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.647 0.56 0.638 0.39
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
175/2999 13.3G 0.02413 0.01744 0.006103 222 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.722 0.537 0.659 0.392
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
176/2999 13.3G 0.02617 0.0165 0.004711 183 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.658 0.547 0.588 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
177/2999 13.3G 0.02391 0.0172 0.006477 160 640: 100% 5/5 [00:04<00:00, 1.19it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.21it/s]
all 79 172 0.722 0.485 0.57 0.329
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
178/2999 13.3G 0.02595 0.0167 0.004114 203 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.75 0.421 0.499 0.285
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
179/2999 13.3G 0.02545 0.01615 0.005344 185 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.862 0.506 0.609 0.366
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
180/2999 13.3G 0.0244 0.01572 0.005259 219 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.848 0.563 0.642 0.371
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
181/2999 13.3G 0.02403 0.01656 0.004655 198 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.708 0.582 0.607 0.349
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
182/2999 13.3G 0.025 0.01808 0.005477 238 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.756 0.603 0.637 0.389
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
183/2999 13.3G 0.02387 0.01685 0.007013 194 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.75 0.625 0.693 0.435
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
184/2999 13.3G 0.02442 0.01655 0.005348 242 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.19it/s]
all 79 172 0.754 0.529 0.602 0.362
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
185/2999 13.3G 0.02413 0.01696 0.0051 175 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.843 0.546 0.663 0.395
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
186/2999 13.3G 0.02388 0.01608 0.003896 203 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.82 0.542 0.656 0.401
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
187/2999 13.3G 0.02426 0.01638 0.005311 166 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.72s/it]
all 79 172 0.802 0.555 0.616 0.374
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
188/2999 13.3G 0.02368 0.01607 0.005871 181 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.736 0.628 0.667 0.394
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
189/2999 13.3G 0.0257 0.01712 0.006646 205 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.781 0.441 0.596 0.332
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
190/2999 13.3G 0.02485 0.01648 0.005049 222 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.17it/s]
all 79 172 0.76 0.466 0.549 0.304
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
191/2999 13.3G 0.02291 0.01608 0.005364 217 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.696 0.473 0.51 0.315
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
192/2999 13.3G 0.02464 0.01737 0.006162 205 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.696 0.493 0.535 0.325
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
193/2999 13.3G 0.02452 0.01706 0.005202 197 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.801 0.429 0.562 0.332
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
194/2999 13.3G 0.02326 0.01667 0.004886 190 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.768 0.432 0.531 0.294
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
195/2999 13.3G 0.02424 0.01685 0.005938 231 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.87 0.384 0.528 0.319
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
196/2999 13.3G 0.02383 0.01643 0.005414 160 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.752 0.584 0.617 0.351
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
197/2999 13.3G 0.02474 0.01629 0.004213 195 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.819 0.516 0.617 0.336
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
198/2999 13.3G 0.02378 0.01605 0.004158 202 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.691 0.492 0.623 0.353
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
199/2999 13.3G 0.02474 0.01601 0.005006 196 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.845 0.481 0.57 0.349
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
200/2999 13.3G 0.02325 0.01525 0.004579 200 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.679 0.425 0.496 0.301
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
201/2999 13.3G 0.02245 0.01579 0.00427 226 640: 100% 5/5 [00:04<00:00, 1.23it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.72s/it]
all 79 172 0.743 0.428 0.494 0.303
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
202/2999 13.3G 0.02279 0.01541 0.007018 163 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.24s/it]
all 79 172 0.795 0.485 0.547 0.328
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
203/2999 13.3G 0.02381 0.01648 0.004034 192 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.695 0.529 0.619 0.37
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
204/2999 13.3G 0.02344 0.01555 0.003905 196 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.34s/it]
all 79 172 0.81 0.49 0.566 0.348
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
205/2999 13.3G 0.02414 0.01678 0.005969 225 640: 100% 5/5 [00:04<00:00, 1.21it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.77s/it]
all 79 172 0.793 0.499 0.551 0.343
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
206/2999 13.3G 0.02397 0.01629 0.005902 211 640: 100% 5/5 [00:04<00:00, 1.20it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.848 0.569 0.645 0.394
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
207/2999 13.3G 0.02395 0.01554 0.005462 170 640: 100% 5/5 [00:04<00:00, 1.24it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.826 0.536 0.643 0.394
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
208/2999 13.3G 0.02359 0.0166 0.005498 224 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.747 0.591 0.647 0.398
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
209/2999 13.3G 0.02367 0.01604 0.00558 225 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.747 0.53 0.614 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
210/2999 13.3G 0.02559 0.01545 0.005393 171 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.15it/s]
all 79 172 0.787 0.519 0.618 0.384
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
211/2999 13.3G 0.02273 0.0167 0.005695 202 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.79 0.512 0.612 0.386
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
212/2999 13.3G 0.02254 0.01724 0.005585 208 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.788 0.571 0.639 0.375
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
213/2999 13.3G 0.02419 0.01494 0.005212 207 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.842 0.53 0.648 0.364
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
214/2999 13.3G 0.02568 0.01664 0.004367 183 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.68 0.579 0.58 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
215/2999 13.3G 0.02463 0.01619 0.005758 209 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.69 0.573 0.583 0.335
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
216/2999 13.3G 0.02261 0.01598 0.005402 208 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.798 0.543 0.61 0.374
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
217/2999 13.3G 0.02275 0.01476 0.004736 160 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.714 0.539 0.56 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
218/2999 13.3G 0.02411 0.01569 0.004459 187 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.78 0.521 0.564 0.35
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
219/2999 13.3G 0.02208 0.01444 0.00422 173 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.627 0.504 0.557 0.353
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
220/2999 13.3G 0.023 0.0164 0.004591 218 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.838 0.465 0.597 0.36
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
221/2999 13.3G 0.02155 0.01479 0.003508 192 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.00s/it]
all 79 172 0.807 0.548 0.654 0.384
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
222/2999 13.3G 0.02316 0.01552 0.004726 217 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.747 0.677 0.707 0.39
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
223/2999 13.3G 0.0233 0.01691 0.007177 180 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.742 0.567 0.639 0.372
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
224/2999 13.3G 0.02206 0.01515 0.00455 173 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.76 0.499 0.619 0.368
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
225/2999 13.3G 0.0244 0.01643 0.004599 181 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.84 0.505 0.606 0.352
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
226/2999 13.3G 0.02239 0.01505 0.005543 201 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.676 0.555 0.635 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
227/2999 13.3G 0.02369 0.01622 0.005755 177 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.729 0.595 0.608 0.356
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
228/2999 13.3G 0.02266 0.01487 0.004909 204 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.665 0.54 0.549 0.345
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
229/2999 13.3G 0.0233 0.01656 0.00652 172 640: 100% 5/5 [00:04<00:00, 1.17it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.10s/it]
all 79 172 0.769 0.492 0.603 0.384
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
230/2999 13.3G 0.02232 0.01574 0.004084 183 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.678 0.568 0.599 0.381
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
231/2999 13.3G 0.0231 0.01623 0.004324 230 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.756 0.594 0.632 0.394
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
232/2999 13.3G 0.02286 0.01546 0.00569 185 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.789 0.53 0.608 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
233/2999 13.3G 0.0229 0.01477 0.004437 149 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.683 0.499 0.573 0.338
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
234/2999 13.3G 0.0234 0.01698 0.005284 215 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.771 0.472 0.544 0.317
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
235/2999 13.3G 0.02219 0.0148 0.004658 186 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.717 0.49 0.543 0.333
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
236/2999 13.3G 0.02321 0.0145 0.005254 161 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.73 0.506 0.55 0.328
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
237/2999 13.3G 0.02371 0.01623 0.004812 204 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.693 0.511 0.554 0.343
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
238/2999 13.3G 0.02394 0.01551 0.004886 161 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.791 0.444 0.594 0.376
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
239/2999 13.3G 0.02325 0.0154 0.004177 195 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.758 0.629 0.657 0.421
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
240/2999 13.3G 0.02192 0.0154 0.003914 202 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.683 0.556 0.631 0.388
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
241/2999 13.3G 0.02239 0.01488 0.007844 184 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.13it/s]
all 79 172 0.694 0.441 0.561 0.346
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
242/2999 13.3G 0.02268 0.0156 0.005672 179 640: 100% 5/5 [00:03<00:00, 1.25it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.61s/it]
all 79 172 0.841 0.46 0.552 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
243/2999 13.3G 0.02341 0.0154 0.005667 185 640: 100% 5/5 [00:04<00:00, 1.23it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.14s/it]
all 79 172 0.662 0.475 0.542 0.315
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
244/2999 13.3G 0.02246 0.01587 0.005929 182 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.674 0.509 0.56 0.354
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
245/2999 13.3G 0.02381 0.01481 0.005467 151 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.78 0.568 0.636 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
246/2999 13.3G 0.02173 0.01692 0.005114 238 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.639 0.566 0.578 0.352
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
247/2999 13.3G 0.02268 0.01652 0.004531 200 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.623 0.536 0.525 0.321
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
248/2999 13.3G 0.02235 0.01425 0.005001 192 640: 100% 5/5 [00:04<00:00, 1.25it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.697 0.525 0.599 0.386
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
249/2999 13.3G 0.02352 0.01621 0.003642 222 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.625 0.49 0.574 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
250/2999 13.3G 0.02184 0.01575 0.00716 221 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.657 0.513 0.563 0.364
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
251/2999 13.3G 0.02174 0.01629 0.004422 242 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.668 0.522 0.576 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
252/2999 13.3G 0.02075 0.01556 0.004782 225 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.705 0.523 0.559 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
253/2999 13.3G 0.02199 0.01595 0.003561 159 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.818 0.495 0.577 0.366
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
254/2999 13.3G 0.02302 0.01519 0.005618 225 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.17it/s]
all 79 172 0.818 0.511 0.561 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
255/2999 13.3G 0.02252 0.01508 0.004516 209 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.77 0.508 0.567 0.356
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
256/2999 13.3G 0.02207 0.01442 0.005011 174 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.763 0.515 0.566 0.366
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
257/2999 13.3G 0.02165 0.01472 0.005958 205 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.16it/s]
all 79 172 0.737 0.488 0.564 0.35
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
258/2999 13.3G 0.02085 0.01448 0.00546 197 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.666 0.512 0.608 0.374
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
259/2999 13.3G 0.02247 0.01579 0.004364 179 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.845 0.575 0.625 0.382
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
260/2999 13.3G 0.02216 0.01446 0.004768 206 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.717 0.526 0.613 0.373
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
261/2999 13.3G 0.02163 0.01531 0.004534 214 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.15s/it]
all 79 172 0.667 0.577 0.606 0.385
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
262/2999 13.3G 0.02124 0.0156 0.004753 214 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.691 0.504 0.557 0.333
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
263/2999 13.3G 0.02157 0.01402 0.003773 168 640: 100% 5/5 [00:04<00:00, 1.22it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.78s/it]
all 79 172 0.692 0.618 0.621 0.369
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
264/2999 13.3G 0.02115 0.01505 0.004787 219 640: 100% 5/5 [00:04<00:00, 1.14it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.54s/it]
all 79 172 0.703 0.587 0.617 0.358
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
265/2999 13.3G 0.02119 0.01501 0.003825 230 640: 100% 5/5 [00:04<00:00, 1.24it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.22s/it]
all 79 172 0.633 0.557 0.562 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
266/2999 13.3G 0.02232 0.01488 0.005141 193 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.20it/s]
all 79 172 0.679 0.553 0.55 0.348
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
267/2999 13.3G 0.02236 0.01423 0.003963 213 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.658 0.539 0.565 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
268/2999 13.3G 0.02109 0.01543 0.005812 185 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.17it/s]
all 79 172 0.631 0.529 0.567 0.36
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
269/2999 13.3G 0.02206 0.01438 0.004758 192 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.662 0.507 0.577 0.362
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
270/2999 13.3G 0.02145 0.01416 0.006062 183 640: 100% 5/5 [00:04<00:00, 1.23it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.62s/it]
all 79 172 0.808 0.489 0.598 0.377
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
271/2999 13.3G 0.02128 0.01363 0.004508 210 640: 100% 5/5 [00:04<00:00, 1.23it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.14s/it]
all 79 172 0.788 0.498 0.592 0.373
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
272/2999 13.3G 0.02323 0.01416 0.004713 181 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.68 0.556 0.591 0.383
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
273/2999 13.3G 0.02241 0.01433 0.005521 175 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.72 0.539 0.587 0.375
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
274/2999 13.3G 0.02156 0.01502 0.005296 187 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.7 0.516 0.578 0.371
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
275/2999 13.3G 0.02187 0.01516 0.004791 177 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.15s/it]
all 79 172 0.676 0.566 0.574 0.361
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
276/2999 13.3G 0.02218 0.01589 0.004767 229 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.699 0.514 0.59 0.365
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
277/2999 13.3G 0.02195 0.01649 0.004888 205 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.737 0.607 0.652 0.416
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
278/2999 13.3G 0.02141 0.01452 0.003921 173 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.657 0.603 0.639 0.408
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
279/2999 13.3G 0.02054 0.01555 0.004726 229 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.778 0.508 0.665 0.41
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
280/2999 13.3G 0.02179 0.01484 0.004448 188 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.793 0.552 0.646 0.392
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
281/2999 13.3G 0.0219 0.01377 0.006125 195 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.744 0.546 0.6 0.392
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
282/2999 13.3G 0.02197 0.01625 0.004896 215 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.678 0.521 0.56 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
283/2999 13.3G 0.02083 0.01468 0.005276 143 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.52s/it]
all 79 172 0.634 0.553 0.573 0.343
Stopping training early as no improvement observed in last 100 epochs. Best results observed at epoch 183, best model saved as best.pt.
To update EarlyStopping(patience=100) pass a new patience value, i.e. `python train.py --patience 300` or use `--patience 0` to disable EarlyStopping.
284 epochs completed in 0.433 hours.
Optimizer stripped from runs/train/exp/weights/last.pt, 14.5MB
Optimizer stripped from runs/train/exp/weights/best.pt, 14.5MB
Validating runs/train/exp/weights/best.pt...
Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.20s/it]
all 79 172 0.75 0.624 0.693 0.434
cadeira 79 78 0.714 0.462 0.527 0.255
geladeira 79 4 0.763 0.816 0.895 0.64
monitor 79 36 0.735 0.463 0.572 0.338
quadro 79 54 0.788 0.756 0.78 0.505
```
</details>
### Training evidence

## Roboflow
https://app.roboflow.com/wilsoncesarschool/projetofinalmodelosconexionistas/1
## HuggingFace
https://huggingface.co/wilsonsob/projetoFinal
|
pallavi176/bert-fine-tuned-cola
|
pallavi176
| 2022-11-05T11:55:11Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-05T11:33:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5778590180299453
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8136
- Matthews Correlation: 0.5779
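A minimal usage sketch, assuming the default `LABEL_0`/`LABEL_1` mapping (with `LABEL_1` corresponding to an acceptable sentence in CoLA):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for grammatical-acceptability classification.
classifier = pipeline("text-classification", model="pallavi176/bert-fine-tuned-cola")

# CoLA scores sentences as acceptable vs. unacceptable; the label names returned
# assume the default LABEL_0 / LABEL_1 mapping (id2label was not customised).
print(classifier("The book was written by the author."))
print(classifier("Book the was author written by the."))
```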
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4785 | 1.0 | 1069 | 0.5265 | 0.4996 |
| 0.3162 | 2.0 | 2138 | 0.6626 | 0.5701 |
| 0.1779 | 3.0 | 3207 | 0.8136 | 0.5779 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Mallik/distilbert-base-uncased-finetuned-emotion
|
Mallik
| 2022-11-05T10:59:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-05T09:39:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
- Accuracy: 0.925
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8215 | 1.0 | 250 | 0.3033 | 0.9105 | 0.9078 |
| 0.2435 | 2.0 | 500 | 0.2128 | 0.925 | 0.9248 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.1
|
nguyenvulebinh/wav2vec2-noisy
|
nguyenvulebinh
| 2022-11-05T10:49:39Z | 46 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-11-05T10:16:55Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: cc-by-nc-4.0
---
# Wav2Vec2-Base with audio augmentation
The base model was pretrained on 16kHz sampled, augmented speech audio. The audio comes from the 960h LibriSpeech dataset, augmented as follows:

The ambient noise dataset includes MUSAN and WHAM (a total of 189 hours, including music, speech, and environmental noise). The reverb dataset is from Room RIR and BUT Speech@FIT (2650 room impulse response signals).
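The augmentation scripts are not part of this repository; a minimal sketch of additive noise and reverberation in PyTorch (file paths, mono audio, and the 10 dB SNR are illustrative assumptions) could look like this:
```python
import torch
import torchaudio

speech, sr = torchaudio.load("speech.wav")       # clean 16 kHz mono utterance (placeholder path)
noise, _ = torchaudio.load("musan_noise.wav")    # ambient noise clip (placeholder path)
rir, _ = torchaudio.load("room_rir.wav")         # room impulse response (placeholder path)

# Additive noise at roughly 10 dB SNR.
noise = noise[:, : speech.shape[1]]
snr_db = 10.0
scale = torch.sqrt(speech.pow(2).mean() / (noise.pow(2).mean() * 10 ** (snr_db / 10)))
noisy = speech + scale * noise

# Reverberation: convolve the speech with a normalised impulse response.
rir = rir / rir.norm(p=2)
reverberant = torch.nn.functional.conv1d(
    speech.unsqueeze(0), rir.flip(-1).unsqueeze(0), padding=rir.shape[-1] - 1
).squeeze(0)
```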
# Model Parameters License
The model parameters are made available for non-commercial use only under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode
### Contact
nguyenvulebinh@gmail.com
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)
|
jonathang/dog_breed
|
jonathang
| 2022-11-05T10:16:42Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-11-02T03:00:36Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Maheshnma/distilbert-base-uncased-finetuned-emotion
|
Maheshnma
| 2022-11-05T09:45:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-05T09:27:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9225964839443589
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2209
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8477 | 1.0 | 250 | 0.3204 | 0.9025 | 0.9000 |
| 0.2559 | 2.0 | 500 | 0.2209 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
OpenBioML/LibreFold_AF2_reproduction
|
OpenBioML
| 2022-11-05T08:56:37Z | 0 | 0 | null |
[
"AlphaFold",
"protein model",
"license:cc-by-4.0",
"region:us"
] | null | 2022-10-20T17:22:18Z |
---
tags:
- AlphaFold
- protein model
license: cc-by-4.0
---
# LibreFold AF2 reproduction
Text
## Intro
Text
## Model description
Text
## Intended uses & limitations
Text
### How to use
Text
### Limitations and bias
Text
## Training data
Text
### Collection process
Text
## Training procedure
### Preprocessing
Text
### BibTeX entry and citation info
```bibtex
Text
```
|
sd-concepts-library/gt-color-paint-2
|
sd-concepts-library
| 2022-11-05T07:41:31Z | 0 | 7 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-05T07:41:26Z |
---
license: mit
---
### GT color paint_2 on Stable Diffusion
This is the `<my-color-paint-GT>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
Shunian/yelp_review_classification
|
Shunian
| 2022-11-05T07:21:17Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-05T06:38:54Z |
---
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: yelp_review_classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.6852
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp_review_classification
This model was trained from scratch on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8517
- Accuracy: 0.6852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 0.7149 | 1.0 | 40625 | 0.6889 | 0.7167 |
| 0.6501 | 2.0 | 81250 | 0.6967 | 0.6979 |
| 0.5547 | 3.0 | 121875 | 0.6915 | 0.7377 |
| 0.5375 | 4.0 | 162500 | 0.6895 | 0.7611 |
| 0.4386 | 5.0 | 203125 | 0.8517 | 0.6852 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
|
MarkGG/Romance-baseline
|
MarkGG
| 2022-11-05T05:16:39Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-05T03:22:25Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Romance-baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Romance-baseline
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.94 | 15 | 10.7009 |
| No log | 1.94 | 30 | 10.0799 |
| No log | 2.94 | 45 | 9.6627 |
| No log | 3.94 | 60 | 9.4619 |
| No log | 4.94 | 75 | 9.2970 |
| No log | 5.94 | 90 | 9.0919 |
| No log | 6.94 | 105 | 8.9071 |
| No log | 7.94 | 120 | 8.7240 |
| No log | 8.94 | 135 | 8.5485 |
| No log | 9.94 | 150 | 8.3952 |
| No log | 10.94 | 165 | 8.2469 |
| No log | 11.94 | 180 | 8.1193 |
| No log | 12.94 | 195 | 7.9918 |
| No log | 13.94 | 210 | 7.8662 |
| No log | 14.94 | 225 | 7.7394 |
| No log | 15.94 | 240 | 7.6219 |
| No log | 16.94 | 255 | 7.5135 |
| No log | 17.94 | 270 | 7.4110 |
| No log | 18.94 | 285 | 7.3021 |
| No log | 19.94 | 300 | 7.2021 |
| No log | 20.94 | 315 | 7.1276 |
| No log | 21.94 | 330 | 7.0278 |
| No log | 22.94 | 345 | 6.9627 |
| No log | 23.94 | 360 | 6.8806 |
| No log | 24.94 | 375 | 6.8214 |
| No log | 25.94 | 390 | 6.7725 |
| No log | 26.94 | 405 | 6.7101 |
| No log | 27.94 | 420 | 6.6792 |
| No log | 28.94 | 435 | 6.6361 |
| No log | 29.94 | 450 | 6.5950 |
| No log | 30.94 | 465 | 6.5745 |
| No log | 31.94 | 480 | 6.5469 |
| No log | 32.94 | 495 | 6.5520 |
| No log | 33.94 | 510 | 6.5121 |
| No log | 34.94 | 525 | 6.5255 |
| No log | 35.94 | 540 | 6.5179 |
| No log | 36.94 | 555 | 6.5079 |
| No log | 37.94 | 570 | 6.5138 |
| No log | 38.94 | 585 | 6.5170 |
| No log | 39.94 | 600 | 6.4807 |
| No log | 40.94 | 615 | 6.5338 |
| No log | 41.94 | 630 | 6.4960 |
| No log | 42.94 | 645 | 6.5342 |
| No log | 43.94 | 660 | 6.5119 |
| No log | 44.94 | 675 | 6.5614 |
| No log | 45.94 | 690 | 6.5235 |
| No log | 46.94 | 705 | 6.5388 |
| No log | 47.94 | 720 | 6.5574 |
| No log | 48.94 | 735 | 6.5581 |
| No log | 49.94 | 750 | 6.5909 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/pcbg9
|
huggingtweets
| 2022-11-05T04:20:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-05T03:45:35Z |
---
language: en
thumbnail: http://www.huggingtweets.com/pcbg9/1667622025279/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1225770876626460673/9joxA6TW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PCBoyGames</div>
<div style="text-align: center; font-size: 14px;">@pcbg9</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from PCBoyGames.
| Data | PCBoyGames |
| --- | --- |
| Tweets downloaded | 547 |
| Retweets | 24 |
| Short tweets | 50 |
| Tweets kept | 473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/672epqcs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pcbg9's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iu5ehsq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iu5ehsq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pcbg9')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nubby/kagura_tohru-artist
|
nubby
| 2022-11-05T03:58:23Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-02T23:04:22Z |
---
license: creativeml-openrail-m
---
Waifu Diffusion 1.3 base model with DreamBooth training on images drawn by the artist "kagura_tohru".
It can be used in Stable Diffusion, including the extremely popular Web UI by Automatic1111, like any other model: place the .CKPT file in the correct directory. Please consult the documentation for your installation of Stable Diffusion for more specific instructions.
Use "m_kgrartist" to activate
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
MarkGG/Romance-cleaned-1
|
MarkGG
| 2022-11-05T03:10:38Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-26T03:35:43Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Romance-cleaned-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Romance-cleaned-1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.97 | 29 | 9.9497 |
| No log | 1.97 | 58 | 9.1816 |
| No log | 2.97 | 87 | 8.5947 |
| No log | 3.97 | 116 | 8.2217 |
| No log | 4.97 | 145 | 7.8354 |
| No log | 5.97 | 174 | 7.5075 |
| No log | 6.97 | 203 | 7.2112 |
| No log | 7.97 | 232 | 6.9077 |
| No log | 8.97 | 261 | 6.5994 |
| No log | 9.97 | 290 | 6.3077 |
| No log | 10.97 | 319 | 6.0416 |
| No log | 11.97 | 348 | 5.8126 |
| No log | 12.97 | 377 | 5.6197 |
| No log | 13.97 | 406 | 5.4789 |
| No log | 14.97 | 435 | 5.3665 |
| No log | 15.97 | 464 | 5.2738 |
| No log | 16.97 | 493 | 5.1942 |
| No log | 17.97 | 522 | 5.1382 |
| No log | 18.97 | 551 | 5.0784 |
| No log | 19.97 | 580 | 5.0347 |
| No log | 20.97 | 609 | 4.9873 |
| No log | 21.97 | 638 | 4.9514 |
| No log | 22.97 | 667 | 4.9112 |
| No log | 23.97 | 696 | 4.8838 |
| No log | 24.97 | 725 | 4.8468 |
| No log | 25.97 | 754 | 4.8221 |
| No log | 26.97 | 783 | 4.7996 |
| No log | 27.97 | 812 | 4.7815 |
| No log | 28.97 | 841 | 4.7606 |
| No log | 29.97 | 870 | 4.7394 |
| No log | 30.97 | 899 | 4.7167 |
| No log | 31.97 | 928 | 4.7140 |
| No log | 32.97 | 957 | 4.6910 |
| No log | 33.97 | 986 | 4.6844 |
| No log | 34.97 | 1015 | 4.6765 |
| No log | 35.97 | 1044 | 4.6687 |
| No log | 36.97 | 1073 | 4.6721 |
| No log | 37.97 | 1102 | 4.6724 |
| No log | 38.97 | 1131 | 4.6629 |
| No log | 39.97 | 1160 | 4.6772 |
| No log | 40.97 | 1189 | 4.6795 |
| No log | 41.97 | 1218 | 4.6788 |
| No log | 42.97 | 1247 | 4.6832 |
| No log | 43.97 | 1276 | 4.6954 |
| No log | 44.97 | 1305 | 4.7009 |
| No log | 45.97 | 1334 | 4.7082 |
| No log | 46.97 | 1363 | 4.7140 |
| No log | 47.97 | 1392 | 4.7158 |
| No log | 48.97 | 1421 | 4.7181 |
| No log | 49.97 | 1450 | 4.7175 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/transgirltoking
|
huggingtweets
| 2022-11-05T02:57:28Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-05T02:56:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/transgirltoking/1667617044734/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1587630117890949121/Uo9ukfaP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">fallmoder</div>
<div style="text-align: center; font-size: 14px;">@transgirltoking</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from fallmoder.
| Data | fallmoder |
| --- | --- |
| Tweets downloaded | 950 |
| Retweets | 280 |
| Short tweets | 97 |
| Tweets kept | 573 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/279zhs1a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @transgirltoking's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ipbrk4ae) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ipbrk4ae/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/transgirltoking')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hazrulakmal/distilgpt2-ecb-finetuned
|
hazrulakmal
| 2022-11-05T01:25:33Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T19:14:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-ecb-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-ecb-finetuned
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9655 | 1.0 | 17714 | 0.9472 |
| 0.9121 | 2.0 | 35428 | 0.8986 |
| 0.8682 | 3.0 | 53142 | 0.8705 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Nanohana/efficietnet-lstm-image-captioning
|
Nanohana
| 2022-11-05T00:28:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-04T22:51:32Z |
---
title: {{image-captioning}}
sdk: {{gradio}}
app_file: app.py
---
# image-captioning
This repository contains an image captioning system that is composed of:
- EfficientNet-B0 pretrained on ImageNet
- Word Embedding with Flickr8k vocabulary
- 1 layer LSTM
It was trained for 100 epochs (CNN weights were frozen), and the vocabulary was built from words that appear at least 5 times in the Flickr8k dataset.

|
huggingtweets/hellgirl2004
|
huggingtweets
| 2022-11-05T00:11:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-05T00:11:39Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1581781821414686722/lvOpNTQf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🎃 rei 💀</div>
<div style="text-align: center; font-size: 14px;">@hellgirl2004</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🎃 rei 💀.
| Data | 🎃 rei 💀 |
| --- | --- |
| Tweets downloaded | 3168 |
| Retweets | 1517 |
| Short tweets | 584 |
| Tweets kept | 1067 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/m0ohu4nr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hellgirl2004's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mcqxcff) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mcqxcff/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hellgirl2004')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k
|
mpjan
| 2022-11-05T00:08:25Z | 8 | 4 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"dataset:unicamp-dl/mmarco",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-05T00:03:16Z |
---
pipeline_tag: sentence-similarity
language:
- 'pt'
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- 'unicamp-dl/mmarco'
---
# mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is a fine-tuning of [sentence-transformers/msmarco-distilbert-base-tas-b](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b) on the first 300k triplets of the Portuguese subset in [unicamp-dl/mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco).
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k')
model = AutoModel.from_pretrained('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-300k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 18750 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9375,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
dodge99/q-FrozenLake-v1-4x4-Slippery
|
dodge99
| 2022-11-04T23:27:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-04T23:08:03Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.58 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are the helper functions from the
# Hugging Face Deep RL course notebook that produced this Q-table.
model = load_from_hub(repo_id="dodge99/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
mariopeng/phoneT5
|
mariopeng
| 2022-11-04T22:53:02Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-17T20:01:55Z |
# Description
Transfer learning on T5 to translate English graphemes to IPA (International Phonetic Alphabet).
- Include "translate to IPA: " as prefix for prompting.
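A minimal usage sketch with that prefix:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mariopeng/phoneT5")
model = AutoModelForSeq2SeqLM.from_pretrained("mariopeng/phoneT5")

# Prompt with the "translate to IPA: " prefix described above.
inputs = tokenizer("translate to IPA: hello world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```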
|
jinhybr/OCR-DocVQA-Donut
|
jinhybr
| 2022-11-04T22:23:22Z | 122 | 11 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"donut",
"image-to-text",
"vision",
"document-question-answering",
"arxiv:2111.15664",
"license:mit",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2022-11-04T22:11:29Z |
---
license: mit
pipeline_tag: document-question-answering
tags:
- donut
- image-to-text
- vision
widget:
- text: "What is the invoice number?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
- text: "What is the purchase amount?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
---
# Donut (base-sized model, fine-tuned on DocVQA)
Donut model fine-tuned on DocVQA. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.

## Intended uses & limitations
This model is fine-tuned on DocVQA, a document visual question answering dataset.
We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
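A condensed sketch of the pattern shown in that documentation (the DocVQA task prompt format is the one Donut was trained with; the image path is a placeholder):
```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("jinhybr/OCR-DocVQA-Donut")
model = VisionEncoderDecoderModel.from_pretrained("jinhybr/OCR-DocVQA-Donut")

image = Image.open("invoice.png").convert("RGB")          # placeholder document image
question = "What is the invoice number?"
prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
)
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))
```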
|
Arnavaz/gpt2-arnavaz-beta
|
Arnavaz
| 2022-11-04T20:55:31Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"Farsi",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-04T12:57:55Z |
---
language: fa
license: apache-2.0
tags:
- Farsi
---
# Arnavāz (ارنواز)
**Model Description:** Arnavaz/gpt2-arnavaz-beta is a GPT-2 language model fine-tuned from the [bolbolzaban/gpt2-persian](https://huggingface.co/bolbolzaban/gpt2-persian) pretrained model.
[bolbolzaban/gpt2-persian](https://huggingface.co/bolbolzaban/gpt2-persian) has been trained similarly to [gpt2-medium](https://huggingface.co/gpt2-medium), with differences in context size, tokenizer and language [(Read more)](https://medium.com/@khashei/a-not-so-dangerous-ai-in-the-persian-language-39172a641c84).
- **Developed by:** [Rezā Latifi](https://rezalatifi.ir)
- **Model Type:** Transformer-based language model
- **Language:** Persian (All characters other than the Persian alphabet are replaced with special tokens)
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [bolbolzaban/gpt2-persian](https://huggingface.co/bolbolzaban/gpt2-persian), [gpt2-medium](https://huggingface.co/gpt2-medium)
- **Resources for more information:**
- [Arnavaz Website](https://openai.com/blog/better-language-models/)
## How to utilize
Using a pipeline for text generation, Arnavaz can be utilized like this:
```python
from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel, AutoConfig
tokenizer = AutoTokenizer.from_pretrained('Arnavaz/gpt2-arnavaz-beta')
model = GPT2LMHeadModel.from_pretrained('Arnavaz/gpt2-arnavaz-beta')
config = AutoConfig.from_pretrained('Arnavaz/gpt2-arnavaz-beta', max_length=512)
generator = pipeline('text-generation', model, tokenizer=tokenizer, config=config)
def getEloquent(ineloquent):
result = generator(f"[BOS]{ineloquent}[SEP]")[0]['generated_text']
return result[result.find('[SEP]')+5:]
sample = getEloquent('استفاده از کاغذ پاپیروس برای نوشتن کتاب از حدود دو هزار سال قبل از میلاد در مصر رایج شد.')
```
|
kabilanp942/t5-finetuned-cnn-dailymail-english
|
kabilanp942
| 2022-11-04T20:50:41Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"Summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-04T17:37:38Z |
---
license: apache-2.0
tags:
- Summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-finetuned-cnn-dailymail-english
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: train
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.8782
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-finetuned-cnn-dailymail-english
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8462
- Rouge1: 24.8782
- Rouge2: 11.9422
- Rougel: 20.5616
- Rougelsum: 23.445
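A minimal usage sketch with the summarization pipeline (the T5 "summarize: " prefix is normally applied automatically from the model config; if it is missing there, prepend it to the text yourself):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="kabilanp942/t5-finetuned-cnn-dailymail-english")

article = (
    "Placeholder article text: the city council approved a new public transport plan "
    "on Tuesday, adding three bus lines and extending tram service hours after months "
    "of public consultation."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```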
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.0856 | 1.0 | 35890 | 1.8462 | 24.8782 | 11.9422 | 20.5616 | 23.445 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jrtec/jrtec-distilroberta-base-mrpc-glue-omar-espejel
|
jrtec
| 2022-11-04T20:31:03Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-04T15:53:58Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: jrtec-distilroberta-base-mrpc-glue-omar-espejel
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8161764705882353
- name: F1
type: f1
value: 0.8747913188647747
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jrtec-distilroberta-base-mrpc-glue-omar-espejel
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4901
- Accuracy: 0.8162
- F1: 0.8748
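MRPC is a sentence-pair task, so a usage sketch encodes both sentences together (treating index 1 as the "equivalent" class follows the usual MRPC label order and is an assumption here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "jrtec/jrtec-distilroberta-base-mrpc-glue-omar-espejel"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # index 1 ~ probability the pair is a paraphrase (assumed label order)
```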
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4845 | 1.09 | 500 | 0.4901 | 0.8162 | 0.8748 |
| 0.3706 | 2.18 | 1000 | 0.6421 | 0.8162 | 0.8691 |
| 0.2003 | 3.27 | 1500 | 0.9711 | 0.8162 | 0.8760 |
| 0.1281 | 4.36 | 2000 | 0.8224 | 0.8480 | 0.8893 |
| 0.0717 | 5.45 | 2500 | 1.1803 | 0.8113 | 0.8511 |
| 0.0344 | 6.54 | 3000 | 1.1759 | 0.8480 | 0.8935 |
| 0.0277 | 7.63 | 3500 | 1.2140 | 0.8456 | 0.8927 |
| 0.0212 | 8.71 | 4000 | 1.0895 | 0.8554 | 0.8974 |
| 0.0071 | 9.8 | 4500 | 1.1849 | 0.8554 | 0.8991 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
rajistics/churn-model
|
rajistics
| 2022-11-04T20:18:12Z | 0 | 0 |
sklearn
|
[
"sklearn",
"skops",
"tabular-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] |
tabular-classification
| 2022-10-15T01:25:28Z |
---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
Contract:
- Two year
- Month-to-month
- One year
Dependents:
- 'Yes'
- 'No'
- 'No'
DeviceProtection:
- 'No'
- 'No'
- 'Yes'
InternetService:
- Fiber optic
- Fiber optic
- DSL
MonthlyCharges:
- 79.05
- 84.95
- 68.8
MultipleLines:
- 'Yes'
- 'Yes'
- 'Yes'
OnlineBackup:
- 'No'
- 'No'
- 'Yes'
OnlineSecurity:
- 'Yes'
- 'No'
- 'Yes'
PaperlessBilling:
- 'No'
- 'Yes'
- 'No'
Partner:
- 'Yes'
- 'Yes'
- 'No'
PaymentMethod:
- Bank transfer (automatic)
- Electronic check
- Bank transfer (automatic)
PhoneService:
- 'Yes'
- 'Yes'
- 'Yes'
SeniorCitizen:
- 0
- 0
- 0
StreamingMovies:
- 'No'
- 'No'
- 'No'
StreamingTV:
- 'No'
- 'Yes'
- 'No'
TechSupport:
- 'No'
- 'No'
- 'Yes'
TotalCharges:
- 5730.7
- 1378.25
- 4111.35
gender:
- Female
- Female
- Male
tenure:
- 72
- 16
- 63
---
# Model description
This is a Logistic Regression model trained on churn dataset.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------------------------|-----------------------------------------------------------------------------------|
| memory | |
| steps | [('preprocessor', ColumnTransformer(transformers=[('num',
Pipeline(steps=[('imputer',
SimpleImputer(strategy='median')),
('std_scaler',
StandardScaler())]),
['MonthlyCharges', 'TotalCharges', 'tenure']),
('cat', OneHotEncoder(handle_unknown='ignore'),
['SeniorCitizen', 'gender', 'Partner',
'Dependents', 'PhoneService', 'MultipleLines',
'InternetService', 'OnlineSecurity',
'OnlineBackup', 'DeviceProtection',
'TechSupport', 'StreamingTV',
'StreamingMovies', 'Contract',
'PaperlessBilling', 'PaymentMethod'])])), ('classifier', LogisticRegression(class_weight='balanced', max_iter=300))] |
| verbose | False |
| preprocessor | ColumnTransformer(transformers=[('num',
Pipeline(steps=[('imputer',
SimpleImputer(strategy='median')),
('std_scaler',
StandardScaler())]),
['MonthlyCharges', 'TotalCharges', 'tenure']),
('cat', OneHotEncoder(handle_unknown='ignore'),
['SeniorCitizen', 'gender', 'Partner',
'Dependents', 'PhoneService', 'MultipleLines',
'InternetService', 'OnlineSecurity',
'OnlineBackup', 'DeviceProtection',
'TechSupport', 'StreamingTV',
'StreamingMovies', 'Contract',
'PaperlessBilling', 'PaymentMethod'])]) |
| classifier | LogisticRegression(class_weight='balanced', max_iter=300) |
| preprocessor__n_jobs | |
| preprocessor__remainder | drop |
| preprocessor__sparse_threshold | 0.3 |
| preprocessor__transformer_weights | |
| preprocessor__transformers | [('num', Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),
('std_scaler', StandardScaler())]), ['MonthlyCharges', 'TotalCharges', 'tenure']), ('cat', OneHotEncoder(handle_unknown='ignore'), ['SeniorCitizen', 'gender', 'Partner', 'Dependents', 'PhoneService', 'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies', 'Contract', 'PaperlessBilling', 'PaymentMethod'])] |
| preprocessor__verbose | False |
| preprocessor__verbose_feature_names_out | True |
| preprocessor__num | Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),
('std_scaler', StandardScaler())]) |
| preprocessor__cat | OneHotEncoder(handle_unknown='ignore') |
| preprocessor__num__memory | |
| preprocessor__num__steps | [('imputer', SimpleImputer(strategy='median')), ('std_scaler', StandardScaler())] |
| preprocessor__num__verbose | False |
| preprocessor__num__imputer | SimpleImputer(strategy='median') |
| preprocessor__num__std_scaler | StandardScaler() |
| preprocessor__num__imputer__add_indicator | False |
| preprocessor__num__imputer__copy | True |
| preprocessor__num__imputer__fill_value | |
| preprocessor__num__imputer__missing_values | nan |
| preprocessor__num__imputer__strategy | median |
| preprocessor__num__imputer__verbose | deprecated |
| preprocessor__num__std_scaler__copy | True |
| preprocessor__num__std_scaler__with_mean | True |
| preprocessor__num__std_scaler__with_std | True |
| preprocessor__cat__categories | auto |
| preprocessor__cat__drop | |
| preprocessor__cat__dtype | <class 'numpy.float64'> |
| preprocessor__cat__handle_unknown | ignore |
| preprocessor__cat__max_categories | |
| preprocessor__cat__min_frequency | |
| preprocessor__cat__sparse | True |
| classifier__C | 1.0 |
| classifier__class_weight | balanced |
| classifier__dual | False |
| classifier__fit_intercept | True |
| classifier__intercept_scaling | 1 |
| classifier__l1_ratio | |
| classifier__max_iter | 300 |
| classifier__multi_class | auto |
| classifier__n_jobs | |
| classifier__penalty | l2 |
| classifier__random_state | |
| classifier__solver | lbfgs |
| classifier__tol | 0.0001 |
| classifier__verbose | 0 |
| classifier__warm_start | False |
</details>
### Model Plot
The interactive HTML rendering of the fitted pipeline (as produced by scikit-learn/skops) is omitted here. In summary, the estimator is a `Pipeline` whose `preprocessor` step is a `ColumnTransformer` that applies a median `SimpleImputer` plus `StandardScaler` to the numeric columns (`MonthlyCharges`, `TotalCharges`, `tenure`) and `OneHotEncoder(handle_unknown='ignore')` to the categorical columns, followed by a `classifier` step of `LogisticRegression(class_weight='balanced', max_iter=300)`.
## Evaluation Results
You can find the details about evaluation process and the evaluation results.
| Metric | Value |
|----------|----------|
| accuracy | 0.730305 |
| f1 score | 0.730305 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle

# dtc_pkl_filename should point to the pickled pipeline file downloaded from this repository
with open(dtc_pkl_filename, 'rb') as file:
    clf = pickle.load(file)
```
</details>
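Because the saved object is a full scikit-learn `Pipeline` with its own preprocessing, it expects a raw pandas DataFrame with the columns shown in the widget above. A minimal prediction sketch (assuming `clf` has been loaded as shown; the values are taken from the first widget example):
```python
import pandas as pd

# One example row with the raw feature columns the pipeline expects
# (values taken from the first example in the widget above)
sample = pd.DataFrame([{
    "gender": "Female", "SeniorCitizen": 0, "Partner": "Yes", "Dependents": "Yes",
    "tenure": 72, "PhoneService": "Yes", "MultipleLines": "Yes",
    "InternetService": "Fiber optic", "OnlineSecurity": "Yes", "OnlineBackup": "No",
    "DeviceProtection": "No", "TechSupport": "No", "StreamingTV": "No",
    "StreamingMovies": "No", "Contract": "Two year", "PaperlessBilling": "No",
    "PaymentMethod": "Bank transfer (automatic)",
    "MonthlyCharges": 79.05, "TotalCharges": 5730.7,
}])

print(clf.predict(sample))        # predicted churn label
print(clf.predict_proba(sample))  # class probabilities
```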
# Model Card Authors
This model card is written by following authors:
skops_user
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```bibtex
@inproceedings{...,year={2020}}
```
# Additional Content
## confusion_matrix

|
Pattkopp/distilbert-base-uncased-finetuned-emotion
|
Pattkopp
| 2022-11-04T20:16:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-04T19:59:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9175
- name: F1
type: f1
value: 0.917868093658934
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2301
- Accuracy: 0.9175
- F1: 0.9179
## Model description
More information needed
## Intended uses & limitations
More information needed
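Pending a fuller description, a minimal usage sketch with the 🤗 Transformers `pipeline` API (the example sentence is invented; the emotion label names come from the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Pattkopp/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you this weekend!"))
```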
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8386 | 1.0 | 250 | 0.3275 | 0.904 | 0.9011 |
| 0.2572 | 2.0 | 500 | 0.2301 | 0.9175 | 0.9179 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
platzi/platzi-distilroberta-base-glue-mrpc-eduardo-ag
|
platzi
| 2022-11-04T19:49:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-04T19:25:03Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-glue-mrpc-eduardo-ag
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8186274509803921
- name: F1
type: f1
value: 0.8634686346863469
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-glue-mrpc-eduardo-ag
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6614
- Accuracy: 0.8186
- F1: 0.8635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5185 | 1.09 | 500 | 0.4796 | 0.8431 | 0.8889 |
| 0.3449 | 2.18 | 1000 | 0.6614 | 0.8186 | 0.8635 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
cambridgeltl/sst_mobilebert-uncased
|
cambridgeltl
| 2022-11-04T19:20:23Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mobilebert",
"text-classification",
"arxiv:2004.02984",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T14:35:36Z |
This model provides a MobileBERT [(Sun et al., 2020)](https://arxiv.org/abs/2004.02984) fine-tuned on the SST data with three sentiments (0 -- negative, 1 -- neutral, and 2 -- positive).
## Example Usage
Below, we provide illustrations on how to use this model to make sentiment predictions.
```python
import torch
from transformers import AutoTokenizer, AutoConfig, MobileBertForSequenceClassification
# load model
model_name = r'cambridgeltl/sst_mobilebert-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = MobileBertForSequenceClassification.from_pretrained(model_name, config=config)
model.eval()
'''
labels:
0 -- negative
1 -- neutral
2 -- positive
'''
# prepare exemplar sentences
batch_sentences = [
"in his first stab at the form , jacquot takes a slightly anarchic approach that works only sporadically .",
"a valueless kiddie paean to pro basketball underwritten by the nba .",
"a very well-made , funny and entertaining picture .",
]
# prepare input
inputs = tokenizer(batch_sentences, max_length=256, truncation=True, padding=True, return_tensors='pt')
input_ids, attention_mask = inputs.input_ids, inputs.attention_mask
# make predictions
outputs = model(input_ids=input_ids, attention_mask=attention_mask)
predictions = torch.argmax(outputs.logits, dim = -1)
print (predictions)
# tensor([1, 0, 2])
```
## Citation:
If you find this model useful, please kindly cite our model as
```bibtex
@misc{susstmobilebert,
author = {Su, Yixuan},
title = {A MobileBERT Fine-tuned on SST},
howpublished = {\url{https://huggingface.co/cambridgeltl/sst_mobilebert-uncased}},
year = 2022
}
```
|
Madhyam123/Madhyam
|
Madhyam123
| 2022-11-04T19:20:21Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-04T19:20:21Z |
---
license: bigscience-openrail-m
---
|
spoiled/roberta-large-neg-tags
|
spoiled
| 2022-11-04T18:49:35Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-04T18:05:23Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-neg-tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-neg-tags
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0016
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.0143 | 1.0 | 938 | 0.0032 | 0.0 | 0.0 | 0.0 | 0.9995 |
| 0.0033 | 2.0 | 1876 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.9996 |
| 0.0039 | 3.0 | 2814 | 0.0018 | 0.0 | 0.0 | 0.0 | 0.9997 |
| 0.0012 | 4.0 | 3752 | 0.0016 | 0.0 | 0.0 | 0.0 | 0.9997 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.10.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/itsbludood
|
huggingtweets
| 2022-11-04T18:36:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-04T18:36:15Z |
---
language: en
thumbnail: http://www.huggingtweets.com/itsbludood/1667587006494/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1543744611742584834/Y_8SQZ8s_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BluDood</div>
<div style="text-align: center; font-size: 14px;">@itsbludood</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BluDood.
| Data | BluDood |
| --- | --- |
| Tweets downloaded | 579 |
| Retweets | 126 |
| Short tweets | 62 |
| Tweets kept | 391 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wux94qs4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsbludood's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w2ic8dfp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w2ic8dfp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itsbludood')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
trtez/Trtez.com
|
trtez
| 2022-11-04T18:36:18Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-11-04T18:34:47Z |
Trtez.com
Thesis writing
Thesis preparation
Master's thesis writing
|
SirVeggie/greg_rutkowski
|
SirVeggie
| 2022-11-04T18:00:04Z | 0 | 4 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-04T17:33:58Z |
---
license: creativeml-openrail-m
---
# Grzegorz Rutkowski stable diffusion model
Original artist: Grzegorz Rutkowski\
Artstation: https://www.artstation.com/rutkowski
## Basic explanation
Token and Class words are what guide the AI to produce images similar to the trained style/object/character.
Include any mix of these words in the prompt to produce varying results, or exclude them for a less pronounced effect.
There is usually at least a slight stylistic effect even without the words, but it is recommended to include at least one.
Adding the token word/phrase followed by the class word/phrase at the start of the prompt produces results most similar to the trained concept, but they can also be placed elsewhere in the prompt. Some models produce better results when not all token/class words are included.
## Model info
model: greg\
token: m_greg\
class: illustration style\
base: waifu diffusion 1.3-full\
images: 36\
steps: 3600
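Example prompt (illustrative only, combining the token and class words above): `m_greg illustration style, a knight overlooking a stormy mountain pass, dramatic lighting`.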
|
svo2/roberta-finetuned-state
|
svo2
| 2022-11-04T17:24:30Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-03T19:41:47Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-state
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-state
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
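As this is an extractive question-answering model fine-tuned from `deepset/roberta-base-squad2`, it can be exercised with the standard QA pipeline. A minimal sketch with an invented question/context pair:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="svo2/roberta-finetuned-state")
result = qa(
    question="Which state is the regional office located in?",
    context="The company's regional office is located in Austin, Texas.",
)
print(result["answer"], result["score"])
```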
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
svo2/roberta-finetuned-city
|
svo2
| 2022-11-04T16:28:31Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-31T17:28:30Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-city
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-city
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
troesy/distilBERT-fresh_10epoch
|
troesy
| 2022-11-04T15:57:02Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-04T15:45:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT-fresh_10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT-fresh_10epoch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0234
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9935
## Model description
More information needed
## Intended uses & limitations
More information needed
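Pending a fuller description, a minimal token-classification sketch with the 🤗 Transformers `pipeline` API (the example sentence is invented; note that the label set is not documented in this card):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="troesy/distilBERT-fresh_10epoch",
    aggregation_strategy="simple",
)
print(tagger("Hugging Face is based in New York City."))
```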
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 174 | 0.1913 | 0.0 | 0.0 | 0.0 | 0.9312 |
| No log | 2.0 | 348 | 0.1431 | 0.0 | 0.0 | 0.0 | 0.9507 |
| 0.2211 | 3.0 | 522 | 0.1053 | 0.0 | 0.0 | 0.0 | 0.9640 |
| 0.2211 | 4.0 | 696 | 0.0770 | 0.0 | 0.0 | 0.0 | 0.9746 |
| 0.2211 | 5.0 | 870 | 0.0581 | 0.0 | 0.0 | 0.0 | 0.9820 |
| 0.0995 | 6.0 | 1044 | 0.0461 | 0.0 | 0.0 | 0.0 | 0.9862 |
| 0.0995 | 7.0 | 1218 | 0.0376 | 0.0 | 0.0 | 0.0 | 0.9886 |
| 0.0995 | 8.0 | 1392 | 0.0290 | 0.0 | 0.0 | 0.0 | 0.9915 |
| 0.054 | 9.0 | 1566 | 0.0238 | 0.0 | 0.0 | 0.0 | 0.9934 |
| 0.054 | 10.0 | 1740 | 0.0234 | 0.0 | 0.0 | 0.0 | 0.9935 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
arjunchandra/ddpm-butterflies-128
|
arjunchandra
| 2022-11-04T15:14:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-04T13:58:06Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
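Until the snippet above is filled in, here is a minimal sketch of the usual unconditional-generation flow, assuming a recent 🤗 Diffusers version in which this checkpoint loads with `DDPMPipeline`:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("arjunchandra/ddpm-butterflies-128")
image = pipeline().images[0]  # runs the full denoising loop and returns a PIL image
image.save("butterfly.png")
```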
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/arjunchandra/ddpm-butterflies-128/tensorboard?#scalars)
|
NikitaShu/testPyramids
|
NikitaShu
| 2022-11-04T14:35:57Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-11-04T14:35:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: NikitaShu/testPyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RaulFD-creator/BrigitCNN
|
RaulFD-creator
| 2022-11-04T14:29:16Z | 0 | 0 | null |
[
"license:bsd-3-clause",
"region:us"
] | null | 2022-11-04T14:26:01Z |
---
license: bsd-3-clause
---
BrigitCNN: CNN model trained for detecting protein-metal binding regions.
|
sd-concepts-library/happy-chaos
|
sd-concepts-library
| 2022-11-04T13:55:04Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-04T13:54:52Z |
---
license: mit
---
### Happy Chaos on Stable Diffusion
This is the `<happychaos>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
gogzy/t5-base-finetuned_renre_2021_70_item1
|
gogzy
| 2022-11-04T13:44:29Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-04T13:40:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gogzy/t5-base-finetuned_renre_2021_70_item1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gogzy/t5-base-finetuned_renre_2021_70_item1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9249
- Validation Loss: 3.4095
- Train Rouge1: 23.3982
- Train Rouge2: 19.6757
- Train Rougel: 22.3564
- Train Rougelsum: 22.8412
- Train Gen Len: 19.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 9.7749 | 6.6798 | 18.9434 | 12.6370 | 16.9890 | 17.7338 | 19.0 | 0 |
| 4.9973 | 4.2477 | 22.6855 | 17.2847 | 21.5463 | 21.7509 | 19.0 | 1 |
| 3.5151 | 3.8275 | 23.5077 | 18.3312 | 21.6536 | 21.9844 | 19.0 | 2 |
| 3.2552 | 3.5650 | 22.6213 | 18.1468 | 21.3466 | 21.8323 | 19.0 | 3 |
| 2.9249 | 3.4095 | 23.3982 | 19.6757 | 22.3564 | 22.8412 | 19.0 | 4 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
kueltzho/ddpm-butterflies-128
|
kueltzho
| 2022-11-04T13:09:09Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-04T12:21:04Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/kueltzho/ddpm-butterflies-128/tensorboard?#scalars)
|
sirui/bert-base-chinese-finetuned-car_corpus
|
sirui
| 2022-11-04T12:45:04Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-04T07:08:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-chinese-finetuned-car_corpus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-car_corpus
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the Car Corpus Database.
It achieves the following results on the evaluation set:
- Loss: 1.5187
## Model description
More information needed
## Intended uses & limitations
More information needed
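Pending a fuller description, a minimal fill-mask sketch with the 🤗 Transformers `pipeline` API (the example sentence is invented):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sirui/bert-base-chinese-finetuned-car_corpus")
for prediction in fill_mask("这辆[MASK]的油耗很低。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```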
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.799 | 1.0 | 3776 | 1.5830 |
| 0.7419 | 2.0 | 7552 | 1.4930 |
| 0.7245 | 3.0 | 11328 | 1.5187 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
troesy/distilBERT-fresh
|
troesy
| 2022-11-04T10:30:15Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-04T10:19:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT-fresh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT-fresh
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1444
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 174 | 0.1957 | 0.0 | 0.0 | 0.0 | 0.9289 |
| No log | 2.0 | 348 | 0.1591 | 0.0 | 0.0 | 0.0 | 0.9438 |
| 0.2272 | 3.0 | 522 | 0.1444 | 0.0 | 0.0 | 0.0 | 0.9489 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pe65374/xcoa-sbert-base-chinese-nli
|
pe65374
| 2022-11-04T09:29:06Z | 6 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"zh",
"arxiv:1909.05658",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-04T09:00:41Z |
---
language: zh
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
widget:
source_sentence: "那个人很开心"
sentences:
- 那个人非常开心
- 那只猫很开心
- 那个人在吃东西
---
# Chinese Sentence BERT
## Model description
This is the sentence embedding model pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
To make testing easier and to resolve the warning raised by sentence-transformers (with which this model is initialized), I forked the original repo.
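A minimal usage sketch follows, assuming the checkpoint loads directly with the sentence-transformers library, as the repository's tags suggest; the sentences are taken from the widget above:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pe65374/xcoa-sbert-base-chinese-nli")

source = "那个人很开心"
candidates = ["那个人非常开心", "那只猫很开心", "那个人在吃东西"]

embeddings = model.encode([source] + candidates, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1:]))  # similarity of the source to each candidate
```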
## Training data
[ChineseTextualInference](https://github.com/liuhuanyong/ChineseTextualInference/) is used as training data.
## Training procedure
The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for five epochs with a sequence length of 128 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved.
```
python3 finetune/run_classifier_siamese.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--config_path models/sbert/base_config.json \
--train_path datasets/ChineseTextualInference/train.tsv \
--dev_path datasets/ChineseTextualInference/dev.tsv \
--learning_rate 5e-5 --epochs_num 5 --batch_size 64
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_sbert_from_uer_to_huggingface.py --input_model_path models/finetuned_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{reimers2019sentence,
title={Sentence-bert: Sentence embeddings using siamese bert-networks},
author={Reimers, Nils and Gurevych, Iryna},
journal={arXiv preprint arXiv:1908.10084},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
|
neerajp/en_core_web_lg
|
neerajp
| 2022-11-04T08:42:19Z | 7 | 1 |
spacy
|
[
"spacy",
"token-classification",
"en",
"license:mit",
"model-index",
"region:us"
] |
token-classification
| 2022-11-04T08:35:24Z |
---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_web_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8535469108
- name: NER Recall
type: recall
value: 0.8592748397
- name: NER F Score
type: f_score
value: 0.8564012977
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9734404547
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9204363007
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.9023174614
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.90444794
---
### Details: https://spacy.io/models/en#en_core_web_lg
English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer.
| Feature | Description |
| --- | --- |
| **Name** | `en_core_web_lg` |
| **Version** | `3.4.1` |
| **spaCy** | `>=3.4.0,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (113 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.93 |
| `TOKEN_P` | 99.57 |
| `TOKEN_R` | 99.58 |
| `TOKEN_F` | 99.57 |
| `TAG_ACC` | 97.34 |
| `SENTS_P` | 91.79 |
| `SENTS_R` | 89.14 |
| `SENTS_F` | 90.44 |
| `DEP_UAS` | 92.04 |
| `DEP_LAS` | 90.23 |
| `ENTS_P` | 85.35 |
| `ENTS_R` | 85.93 |
| `ENTS_F` | 85.64 |
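A minimal usage sketch (assuming the pipeline package from this repository has been installed into the current environment, e.g. via the wheel it ships, so that `spacy.load` can resolve the name; the example sentence is invented):
```python
import spacy

nlp = spacy.load("en_core_web_lg")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
print([(ent.text, ent.label_) for ent in doc.ents])          # named entities
print([(tok.text, tok.tag_, tok.dep_) for tok in doc[:6]])   # tags and dependency labels
```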
|
furyhawk/finetuning-sentiment-model-3000-samples
|
furyhawk
| 2022-11-04T08:30:40Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T13:11:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.91
- name: F1
type: f1
value: 0.9117158742287056
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2316
- Accuracy: 0.91
- F1: 0.9117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
MarkGG/Romance-cleaned-2
|
MarkGG
| 2022-11-04T05:52:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-28T07:54:27Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Romance-cleaned-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Romance-cleaned-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 16 | 10.3553 |
| No log | 1.96 | 32 | 9.5625 |
| No log | 2.96 | 48 | 9.0898 |
| No log | 3.96 | 64 | 8.7852 |
| No log | 4.96 | 80 | 8.4694 |
| No log | 5.96 | 96 | 8.2122 |
| No log | 6.96 | 112 | 8.0040 |
| No log | 7.96 | 128 | 7.8029 |
| No log | 8.96 | 144 | 7.5950 |
| No log | 9.96 | 160 | 7.4081 |
| No log | 10.96 | 176 | 7.2391 |
| No log | 11.96 | 192 | 7.0784 |
| No log | 12.96 | 208 | 6.9139 |
| No log | 13.96 | 224 | 6.7530 |
| No log | 14.96 | 240 | 6.5983 |
| No log | 15.96 | 256 | 6.4403 |
| No log | 16.96 | 272 | 6.3025 |
| No log | 17.96 | 288 | 6.1562 |
| No log | 18.96 | 304 | 6.0147 |
| No log | 19.96 | 320 | 5.8919 |
| No log | 20.96 | 336 | 5.7709 |
| No log | 21.96 | 352 | 5.6666 |
| No log | 22.96 | 368 | 5.5818 |
| No log | 23.96 | 384 | 5.5051 |
| No log | 24.96 | 400 | 5.4356 |
| No log | 25.96 | 416 | 5.3788 |
| No log | 26.96 | 432 | 5.3230 |
| No log | 27.96 | 448 | 5.2823 |
| No log | 28.96 | 464 | 5.2513 |
| No log | 29.96 | 480 | 5.2218 |
| No log | 30.96 | 496 | 5.1910 |
| No log | 31.96 | 512 | 5.1609 |
| No log | 32.96 | 528 | 5.1500 |
| No log | 33.96 | 544 | 5.1268 |
| No log | 34.96 | 560 | 5.1012 |
| No log | 35.96 | 576 | 5.0973 |
| No log | 36.96 | 592 | 5.0769 |
| No log | 37.96 | 608 | 5.0653 |
| No log | 38.96 | 624 | 5.0489 |
| No log | 39.96 | 640 | 5.0458 |
| No log | 40.96 | 656 | 5.0379 |
| No log | 41.96 | 672 | 5.0347 |
| No log | 42.96 | 688 | 5.0161 |
| No log | 43.96 | 704 | 5.0226 |
| No log | 44.96 | 720 | 5.0215 |
| No log | 45.96 | 736 | 5.0190 |
| No log | 46.96 | 752 | 5.0087 |
| No log | 47.96 | 768 | 5.0309 |
| No log | 48.96 | 784 | 5.0232 |
| No log | 49.96 | 800 | 5.0319 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
lvkaokao/bert-base-uncased-teacher-preparation-pretrain
|
lvkaokao
| 2022-11-04T02:50:34Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-09-27T06:13:02Z |
---
license: other
---
```bash
#!/bin/bash
# Apache v2 license
# Copyright (C) 2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Teacher Preparation
# Notes:
# Auto mixed precision can be used by adding --fp16
# Distributed training can be used with the torch.distributed.launch app
TEACHER_PATH=./bert-base-uncased-teacher-preparation-pretrain
OUTPUT_DIR=$TEACHER_PATH
DATA_CACHE_DIR=/root/kaokao/Model-Compression-Research-Package/examples/transformers/language-modeling/wikipedia_processed_for_pretrain
python -m torch.distributed.launch \
--nproc_per_node=8 \
../../examples/transformers/language-modeling/run_mlm.py \
--model_name_or_path bert-base-uncased \
--datasets_name_config wikipedia:20200501.en \
--data_process_type segment_pair_nsp \
--dataset_cache_dir $DATA_CACHE_DIR \
--do_train \
--learning_rate 5e-5 \
--max_steps 100000 \
--warmup_ratio 0.01 \
--weight_decay 0.01 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 4 \
--logging_steps 10 \
--save_steps 5000 \
--save_total_limit 2 \
--output_dir $OUTPUT_DIR \
--run_name pofa-teacher-prepare-pretrain
```
|
0xkrm/q-FrozenLake-v1-4x4-noSlippery
|
0xkrm
| 2022-11-04T02:37:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-04T02:34:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the Hugging Face
# Deep RL course notebook (not a pip-installable API)
model = load_from_hub(repo_id="0xkrm/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
fake4325634/chkn
|
fake4325634
| 2022-11-04T02:18:20Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-03T23:31:04Z |
---
license: mit
---
Trained on amateur photographs of chickens from Reddit. Include "chkn" in a prompt to use.






|
g30rv17ys/customdbmodelv6
|
g30rv17ys
| 2022-11-04T01:35:01Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:geevegeorge/customdbv6",
"license:apache-2.0",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-11-03T20:19:12Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: geevegeorge/customdbv6
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# customdbmodelv6
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `geevegeorge/customdbv6` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
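A minimal sketch, assuming this repository exposes a standard 🤗 Diffusers pipeline that can be loaded by name:
```python
from diffusers import DiffusionPipeline

# Usage sketch: assumes the repo id below resolves to a standard diffusers pipeline
pipeline = DiffusionPipeline.from_pretrained("g30rv17ys/customdbmodelv6")
result = pipeline()   # generate one sample with the default settings
print(result)         # inspect the returned images/audio before saving
```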
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- gradient_accumulation_steps: 8
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/customdbmodelv6/tensorboard?#scalars)
|
huggingtweets/pastapixels
|
huggingtweets
| 2022-11-04T00:14:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-04T00:10:59Z |
---
language: en
thumbnail: http://www.huggingtweets.com/pastapixels/1667520823262/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1406399825969659907/ghOhzavP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jon</div>
<div style="text-align: center; font-size: 14px;">@pastapixels</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jon.
| Data | Jon |
| --- | --- |
| Tweets downloaded | 593 |
| Retweets | 23 |
| Short tweets | 301 |
| Tweets kept | 269 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2p6blook/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pastapixels's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10iyzbm8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10iyzbm8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pastapixels')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kartikpalani/eai-setfit-model2
|
kartikpalani
| 2022-11-03T23:03:58Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-03T23:03:52Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kartikpalani/eai-setfit-model2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kartikpalani/eai-setfit-model2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kartikpalani/eai-setfit-model2')
model = AutoModel.from_pretrained('kartikpalani/eai-setfit-model2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kartikpalani/eai-setfit-model2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3184 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 3184,
"warmup_steps": 319,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/akamos_33
|
huggingtweets
| 2022-11-03T20:55:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T20:55:13Z |
---
language: en
thumbnail: http://www.huggingtweets.com/akamos_33/1667508949674/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1580799021593100288/p6DXveVh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BIG AKAMO</div>
<div style="text-align: center; font-size: 14px;">@akamos_33</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BIG AKAMO.
| Data | BIG AKAMO |
| --- | --- |
| Tweets downloaded | 705 |
| Retweets | 111 |
| Short tweets | 176 |
| Tweets kept | 418 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2laa93tf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @akamos_33's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/65lj4i53) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/65lj4i53/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/akamos_33')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sadanyh/Arabic-Dialect-Translation
|
sadanyh
| 2022-11-03T20:51:57Z | 0 | 0 | null |
[
"translation",
"ar",
"en",
"dataset:twitter",
"license:apache-2.0",
"region:us"
] |
translation
| 2022-11-03T15:50:28Z |
---
language:
- ar
- en
tags:
- translation
license: apache-2.0
datasets:
- twitter
metrics:
- bleu
- sacrebleu
---
|
sd-concepts-library/angus-mcbride-style
|
sd-concepts-library
| 2022-11-03T20:38:25Z | 0 | 7 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-03T20:38:20Z |
---
license: mit
---
### angus mcbride style on Stable Diffusion
This is the `<angus-mcbride-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
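A minimal sketch of using the concept with 🤗 Diffusers, assuming a recent release with textual-inversion loading support and the `runwayml/stable-diffusion-v1-5` base checkpoint:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: the base checkpoint and diffusers feature set below are assumptions
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/angus-mcbride-style")

image = pipe("a mounted knight in the style of <angus-mcbride-style>").images[0]
image.save("angus-mcbride-style.png")
```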
Here is the new concept you will be able to use as a `style`:


















































|
drandran/asmonbald
|
drandran
| 2022-11-03T20:37:46Z | 0 | 4 | null |
[
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:unknown",
"region:us"
] |
text-to-image
| 2022-11-03T20:22:33Z |
---
license: unknown
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
# Asmongold model.ckpt for Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. Using Dreambooth, I've trained it on 20 images of Twitch streamer Asmongold for text-to-image illustration generation with Stable Diffusion.
Feel free to download, use, and share the model as you like. To trigger the AI to generate an illustration based on the trained Asmongold images, include the tag "asmonbald" in your prompts.
Example:
a detailed portrait photo of a man
vs
a detailed portrait photo of asmonbald
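A minimal sketch, assuming `model.ckpt` from this repo has been downloaded locally and the installed diffusers release supports loading single-file Stable Diffusion checkpoints:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: assumes model.ckpt is saved locally and single-file loading is available
pipe = StableDiffusionPipeline.from_single_file("model.ckpt", torch_dtype=torch.float16).to("cuda")

image = pipe("a detailed portrait photo of asmonbald").images[0]
image.save("asmonbald.png")
```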
---
|
pnichite/en_pipeline_123
|
pnichite
| 2022-11-03T19:57:26Z | 5 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-11-03T19:57:04Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline_123
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7661674609
- name: NER Recall
type: recall
value: 0.8052226793
- name: NER F Score
type: f_score
value: 0.7852097323
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline_123` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DESCRIPTION`, `TITLE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 78.52 |
| `ENTS_P` | 76.62 |
| `ENTS_R` | 80.52 |
| `TRANSFORMER_LOSS` | 1811559.14 |
| `NER_LOSS` | 6345113.13 |
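A minimal usage sketch, assuming the packaged `en_pipeline_123` wheel from this repository has been installed into the current environment:
```python
import spacy

# Sketch only: assumes the packaged en_pipeline_123 wheel from this repo is installed
nlp = spacy.load("en_pipeline_123")
doc = nlp("Senior Data Scientist responsible for building NLP pipelines.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # expected labels: TITLE, DESCRIPTION
```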
|
TTian/bert-classifier-feedback-qa
|
TTian
| 2022-11-03T19:29:33Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-03T19:14:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-classifier-feedback-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-classifier-feedback-qa
This model is a fine-tuned version of [TTian/bert-mlm-feedback](https://huggingface.co/TTian/bert-mlm-feedback) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
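A minimal usage sketch, assuming the fine-tuned classifier is loaded by its repo id with the `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Usage sketch: the repo id below is an assumption based on this model page
classifier = pipeline("text-classification", model="TTian/bert-classifier-feedback-qa")
print(classifier("The response fully answers the student's question."))
```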
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/deltazulu14
|
huggingtweets
| 2022-11-03T18:48:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T18:46:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/deltazulu14/1667501296205/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1569374676933033984/NSveEXrv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Delta Zulu</div>
<div style="text-align: center; font-size: 14px;">@deltazulu14</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Delta Zulu.
| Data | Delta Zulu |
| --- | --- |
| Tweets downloaded | 881 |
| Retweets | 108 |
| Short tweets | 150 |
| Tweets kept | 623 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8h87mrlb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deltazulu14's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mwjzatl4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mwjzatl4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deltazulu14')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
santiagoahl/vit_model
|
santiagoahl
| 2022-11-03T18:20:04Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-03T17:40:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
model-index:
- name: vit_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
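A minimal usage sketch, assuming the fine-tuned checkpoint is loaded by its repo id with the `transformers` image-classification pipeline:
```python
from transformers import pipeline

# Usage sketch: the repo id below is an assumption based on this model page
classifier = pipeline("image-classification", model="santiagoahl/vit_model")
print(classifier("path/to/bean_leaf.jpg"))  # replace with a real image path or URL
```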
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
rexwang8/qilin-lit-6b
|
rexwang8
| 2022-11-03T16:58:09Z | 30 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"text generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-23T02:10:01Z |
---
language: en
thumbnail: "https://i.ibb.co/HBqvBFY/mountain-xianxia-chinese-scenic-landscape-craggy-mist-action-scene-pagoda-s-2336925014-1.png"
tags:
- text generation
- pytorch
license: mit
---
# Qilin-lit-6b Description
The most up-to-date version is V1.1.0, which is fine-tuned on 550 MB of webnovels found on the NovelUpdates website. (https://www.novelupdates.com/)
The style is SFW and whimsical, excelling at telling fantasy stories, especially webnovels.
## Downstream Uses
This model can be used for entertainment purposes and as a creative writing assistant for fiction writers.
## Usage with Kobold AI Colab (Easiest)
GPU -> https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/GPU.ipynb
TPU -> https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb
Replace the drop-down value with "rexwang8/qilin-lit-6b" and select that model.
## Usage with Kobold AI Local
Load it via AI > Load a model from its directory. The model name is "rexwang8/qilin-lit-6b". If you get a config.json not found error, reload the program and give it some time to find your GPUs.
## Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('rexwang8/qilin-lit-6b')
tokenizer = AutoTokenizer.from_pretrained('rexwang8/lit-6b')
prompt = '''I had eyes but couldn't see Mount Tai!'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
---
## Qilin-lit-6b (V1.1.0)
Fine-tuned version of EleutherAI/gpt-j-6B (https://huggingface.co/EleutherAI/gpt-j-6B) on Coreweave's infrastructure (<https://www.coreweave.com/>) using an A40 over ~80 hours.
3150 steps, 1 epoch, trained on 550 MB of primarily Xianxia-genre webnovels (translated to English).
---
## Team members and Acknowledgements
Rex Wang - Author
Coreweave - Computational materials
With help from:
Wes Brown, Anthony Mercurio
---
## Version History
1.1.0 - 550 MB dataset (34 books), 3150 steps (no reordering, no sampling)
1.0.0 - 100 MB dataset (3 books), 300 steps (no reordering, no sampling)
|