| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
stevaras2/ppo-Pyramids
|
stevaras2
| 2023-01-26T13:46:04Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-01-26T13:37:52Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: stevaras2/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Heerak/xlm-roberta-base-finetuned-panx-en
|
Heerak
| 2023-01-26T13:44:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-26T13:06:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7047619047619047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3850
- F1: 0.7048
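As a quick way to try the model, here is a minimal inference sketch using the `transformers` pipeline; the aggregation strategy and the example sentence are illustrative choices, not part of this card.
```python
from transformers import pipeline

# Minimal NER sketch; aggregation_strategy="simple" merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="Heerak/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works at Google in Mountain View."))
```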
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 50 | 0.5487 | 0.5693 |
| 0.775 | 2.0 | 100 | 0.4213 | 0.6837 |
| 0.775 | 3.0 | 150 | 0.3850 | 0.7048 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
iblub/SnowballTarget1
|
iblub
| 2023-01-26T13:41:54Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-26T13:41:48Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: iblub/SnowballTarget1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
KoboldAI/OPT-30B-Erebus
|
KoboldAI
| 2023-01-26T13:24:11Z | 1,540 | 63 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-01-21T08:06:38Z |
---
language: en
license: other
commercial: no
inference: false
---
# OPT 30B - Erebus
## Model description
This is the second generation of the original Shinen made by Mr. Seeker. The full dataset consists of 6 different sources, all surrounding the "Adult" theme. The name "Erebus" comes from Greek mythology and refers to "darkness"; this is in line with Shin'en, or "deep abyss". For inquiries, please contact the KoboldAI community. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The data can be divided into 6 different datasets:
- Literotica (everything with 4.5/5 or higher)
- Sexstories (everything with 90 or higher)
- Dataset-G (private dataset of X-rated stories)
- Doc's Lab (all stories)
- Pike Dataset (novels with "adult" rating)
- SoFurry (collection of various animals)
The dataset uses `[Genre: <comma-separated list of genres>]` for tagging.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/OPT-30B-Erebus')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
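Since the training data carries `[Genre: ...]` tags, you can also experiment with prefixing a prompt the same way. Treat this purely as an illustration: the card does not document that the tag steers generation, so that part is an assumption.
```python
from transformers import pipeline

# Illustrative prompt only; the genre prefix mirrors the training-data tagging scheme.
generator = pipeline('text-generation', model='KoboldAI/OPT-30B-Erebus')
prompt = "[Genre: romance, fantasy] The innkeeper smiled as the knight walked in."
print(generator(prompt, do_sample=True, max_new_tokens=60)[0]['generated_text'])
```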
## Limitations and biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). **Warning: This model has a very strong NSFW bias!**
### License
OPT-30B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### BibTeX entry and citation info
```
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
abkbvknv/bert-finetuned-ner
|
abkbvknv
| 2023-01-26T13:19:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-26T13:12:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
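A minimal inference sketch is shown below; the example sentence is illustrative, and it assumes the uploaded checkpoint keeps its fine-tuned label map in `config.json`.
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abkbvknv/bert-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("abkbvknv/bert-finetuned-ner")

inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="pt")
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(dim=-1)[0]

# Print each word-piece token with its predicted tag
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[int(pred)])
```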
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ludigija/Ludigija_project
|
ludigija
| 2023-01-26T13:15:53Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-01-26T13:15:53Z |
---
license: bigscience-openrail-m
---
|
gokuls/distilbert_add_GLUE_Experiment_qnli
|
gokuls
| 2023-01-26T13:09:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:47:01Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_add_GLUE_Experiment_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6066263957532492
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_qnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6648
- Accuracy: 0.6066
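For reference, a minimal inference sketch follows. QNLI is a sentence-pair task (does the sentence answer the question?); the example pair is illustrative, and whether the config exposes readable label names rather than LABEL_0/LABEL_1 depends on how the training script was set up.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gokuls/distilbert_add_GLUE_Experiment_qnli")
model = AutoModelForSequenceClassification.from_pretrained("gokuls/distilbert_add_GLUE_Experiment_qnli")

question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is in Paris."
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```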
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6886 | 1.0 | 410 | 0.6648 | 0.6066 |
| 0.6569 | 2.0 | 820 | 0.6677 | 0.5999 |
| 0.6419 | 3.0 | 1230 | 0.6672 | 0.5914 |
| 0.6293 | 4.0 | 1640 | 0.6677 | 0.5977 |
| 0.6118 | 5.0 | 2050 | 0.6691 | 0.6002 |
| 0.5857 | 6.0 | 2460 | 0.6854 | 0.6077 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_qnli_256
|
gokuls
| 2023-01-26T12:48:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:35:41Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_add_GLUE_Experiment_qnli_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5905180303862346
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_qnli_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6656
- Accuracy: 0.5905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6936 | 1.0 | 410 | 0.6893 | 0.5654 |
| 0.6702 | 2.0 | 820 | 0.6656 | 0.5905 |
| 0.6477 | 3.0 | 1230 | 0.6665 | 0.5966 |
| 0.6369 | 4.0 | 1640 | 0.6665 | 0.5953 |
| 0.627 | 5.0 | 2050 | 0.6724 | 0.5934 |
| 0.6173 | 6.0 | 2460 | 0.6842 | 0.5920 |
| 0.6083 | 7.0 | 2870 | 0.7093 | 0.5810 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_add_GLUE_Experiment_mrpc_256
|
gokuls
| 2023-01-26T12:47:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:41:44Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mobilebert_add_GLUE_Experiment_mrpc_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_mrpc_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6207
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6419 | 1.0 | 29 | 0.6266 | 0.6838 | 0.8122 | 0.7480 |
| 0.6297 | 2.0 | 58 | 0.6236 | 0.6838 | 0.8122 | 0.7480 |
| 0.6307 | 3.0 | 87 | 0.6241 | 0.6838 | 0.8122 | 0.7480 |
| 0.63 | 4.0 | 116 | 0.6243 | 0.6838 | 0.8122 | 0.7480 |
| 0.6283 | 5.0 | 145 | 0.6219 | 0.6838 | 0.8122 | 0.7480 |
| 0.6243 | 6.0 | 174 | 0.6207 | 0.6838 | 0.8122 | 0.7480 |
| 0.6206 | 7.0 | 203 | 0.6346 | 0.6838 | 0.8122 | 0.7480 |
| 0.6034 | 8.0 | 232 | 0.6519 | 0.6348 | 0.7545 | 0.6947 |
| 0.5877 | 9.0 | 261 | 0.6375 | 0.6838 | 0.8122 | 0.7480 |
| 0.5722 | 10.0 | 290 | 0.6446 | 0.6299 | 0.7504 | 0.6902 |
| 0.5619 | 11.0 | 319 | 0.6733 | 0.6814 | 0.8105 | 0.7459 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_mrpc
|
gokuls
| 2023-01-26T12:46:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:41:46Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_add_GLUE_Experiment_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.696078431372549
- name: F1
type: f1
value: 0.8171091445427728
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6028
- Accuracy: 0.6961
- F1: 0.8171
- Combined Score: 0.7566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6617 | 1.0 | 15 | 0.6507 | 0.6838 | 0.8122 | 0.7480 |
| 0.6412 | 2.0 | 30 | 0.6290 | 0.6838 | 0.8122 | 0.7480 |
| 0.6315 | 3.0 | 45 | 0.6252 | 0.6838 | 0.8122 | 0.7480 |
| 0.6319 | 4.0 | 60 | 0.6236 | 0.6838 | 0.8122 | 0.7480 |
| 0.6321 | 5.0 | 75 | 0.6225 | 0.6838 | 0.8122 | 0.7480 |
| 0.616 | 6.0 | 90 | 0.6028 | 0.6961 | 0.8171 | 0.7566 |
| 0.5469 | 7.0 | 105 | 0.6485 | 0.6446 | 0.7349 | 0.6898 |
| 0.4436 | 8.0 | 120 | 0.7536 | 0.6838 | 0.7909 | 0.7374 |
| 0.3794 | 9.0 | 135 | 0.7805 | 0.6961 | 0.7898 | 0.7430 |
| 0.3158 | 10.0 | 150 | 0.8811 | 0.6838 | 0.7825 | 0.7331 |
| 0.281 | 11.0 | 165 | 0.9246 | 0.6863 | 0.7881 | 0.7372 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_cola
|
gokuls
| 2023-01-26T12:41:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:37:35Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert_add_GLUE_Experiment_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6182
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6218 | 1.0 | 34 | 0.6182 | 0.0 |
| 0.611 | 2.0 | 68 | 0.6194 | 0.0 |
| 0.6084 | 3.0 | 102 | 0.6226 | 0.0 |
| 0.6104 | 4.0 | 136 | 0.6186 | 0.0 |
| 0.6102 | 5.0 | 170 | 0.6214 | 0.0 |
| 0.6095 | 6.0 | 204 | 0.6187 | 0.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_add_GLUE_Experiment_cola
|
gokuls
| 2023-01-26T12:38:15Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:25:38Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: mobilebert_add_GLUE_Experiment_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_cola
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6127
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6126 | 1.0 | 67 | 0.6183 | 0.0 |
| 0.6078 | 2.0 | 134 | 0.6179 | 0.0 |
| 0.6072 | 3.0 | 201 | 0.6183 | 0.0 |
| 0.6062 | 4.0 | 268 | 0.6164 | 0.0 |
| 0.601 | 5.0 | 335 | 0.6127 | 0.0 |
| 0.5928 | 6.0 | 402 | 0.6148 | 0.0 |
| 0.588 | 7.0 | 469 | 0.6224 | 0.0 |
| 0.582 | 8.0 | 536 | 0.6174 | 0.0029 |
| 0.5807 | 9.0 | 603 | 0.6301 | 0.0029 |
| 0.5743 | 10.0 | 670 | 0.6156 | 0.0438 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_add_GLUE_Experiment_cola_128
|
gokuls
| 2023-01-26T12:36:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:25:26Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: mobilebert_add_GLUE_Experiment_cola_128
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_cola_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.617 | 1.0 | 67 | 0.6181 | 0.0 |
| 0.608 | 2.0 | 134 | 0.6181 | 0.0 |
| 0.6075 | 3.0 | 201 | 0.6183 | 0.0 |
| 0.6072 | 4.0 | 268 | 0.6177 | 0.0 |
| 0.6069 | 5.0 | 335 | 0.6185 | 0.0 |
| 0.606 | 6.0 | 402 | 0.6168 | 0.0 |
| 0.6014 | 7.0 | 469 | 0.6234 | 0.0 |
| 0.5947 | 8.0 | 536 | 0.6218 | 0.0 |
| 0.5858 | 9.0 | 603 | 0.6321 | 0.0 |
| 0.579 | 10.0 | 670 | 0.6177 | 0.0464 |
| 0.5762 | 11.0 | 737 | 0.6185 | 0.0464 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_mrpc_256
|
gokuls
| 2023-01-26T12:34:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:32:22Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_add_GLUE_Experiment_mrpc_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7107843137254902
- name: F1
type: f1
value: 0.8233532934131738
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_mrpc_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5932
- Accuracy: 0.7108
- F1: 0.8234
- Combined Score: 0.7671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.637 | 1.0 | 15 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
| 0.629 | 2.0 | 30 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6302 | 3.0 | 45 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.63 | 4.0 | 60 | 0.6241 | 0.6838 | 0.8122 | 0.7480 |
| 0.6323 | 5.0 | 75 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6299 | 6.0 | 90 | 0.6243 | 0.6838 | 0.8122 | 0.7480 |
| 0.6325 | 7.0 | 105 | 0.6239 | 0.6838 | 0.8122 | 0.7480 |
| 0.6301 | 8.0 | 120 | 0.6239 | 0.6838 | 0.8122 | 0.7480 |
| 0.6324 | 9.0 | 135 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6293 | 10.0 | 150 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6307 | 11.0 | 165 | 0.6239 | 0.6838 | 0.8122 | 0.7480 |
| 0.6302 | 12.0 | 180 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6338 | 13.0 | 195 | 0.6237 | 0.6838 | 0.8122 | 0.7480 |
| 0.6281 | 14.0 | 210 | 0.6225 | 0.6838 | 0.8122 | 0.7480 |
| 0.6263 | 15.0 | 225 | 0.6183 | 0.6838 | 0.8122 | 0.7480 |
| 0.6017 | 16.0 | 240 | 0.5932 | 0.7108 | 0.8234 | 0.7671 |
| 0.5213 | 17.0 | 255 | 0.6146 | 0.6642 | 0.7540 | 0.7091 |
| 0.4383 | 18.0 | 270 | 0.6405 | 0.6912 | 0.7842 | 0.7377 |
| 0.3903 | 19.0 | 285 | 0.6910 | 0.6912 | 0.7872 | 0.7392 |
| 0.363 | 20.0 | 300 | 0.7221 | 0.6544 | 0.7374 | 0.6959 |
| 0.3306 | 21.0 | 315 | 0.7583 | 0.6863 | 0.7808 | 0.7335 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_mrpc_192
|
gokuls
| 2023-01-26T12:33:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:31:23Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_add_GLUE_Experiment_mrpc_192
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_mrpc_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6238
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6399 | 1.0 | 15 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6292 | 2.0 | 30 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
| 0.6293 | 3.0 | 45 | 0.6241 | 0.6838 | 0.8122 | 0.7480 |
| 0.6308 | 4.0 | 60 | 0.6246 | 0.6838 | 0.8122 | 0.7480 |
| 0.6328 | 5.0 | 75 | 0.6238 | 0.6838 | 0.8122 | 0.7480 |
| 0.6301 | 6.0 | 90 | 0.6243 | 0.6838 | 0.8122 | 0.7480 |
| 0.6334 | 7.0 | 105 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
| 0.6297 | 8.0 | 120 | 0.6241 | 0.6838 | 0.8122 | 0.7480 |
| 0.6317 | 9.0 | 135 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
| 0.6303 | 10.0 | 150 | 0.6239 | 0.6838 | 0.8122 | 0.7480 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_mrpc_384
|
gokuls
| 2023-01-26T12:32:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:29:22Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_add_GLUE_Experiment_mrpc_384
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7009803921568627
- name: F1
type: f1
value: 0.8189910979228486
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_mrpc_384
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5935
- Accuracy: 0.7010
- F1: 0.8190
- Combined Score: 0.7600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6355 | 1.0 | 15 | 0.6261 | 0.6838 | 0.8122 | 0.7480 |
| 0.6315 | 2.0 | 30 | 0.6294 | 0.6838 | 0.8122 | 0.7480 |
| 0.6327 | 3.0 | 45 | 0.6241 | 0.6838 | 0.8122 | 0.7480 |
| 0.6344 | 4.0 | 60 | 0.6285 | 0.6838 | 0.8122 | 0.7480 |
| 0.6328 | 5.0 | 75 | 0.6245 | 0.6838 | 0.8122 | 0.7480 |
| 0.6293 | 6.0 | 90 | 0.6245 | 0.6838 | 0.8122 | 0.7480 |
| 0.6341 | 7.0 | 105 | 0.6239 | 0.6838 | 0.8122 | 0.7480 |
| 0.6298 | 8.0 | 120 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6304 | 9.0 | 135 | 0.6232 | 0.6838 | 0.8122 | 0.7480 |
| 0.6286 | 10.0 | 150 | 0.6196 | 0.6838 | 0.8122 | 0.7480 |
| 0.6045 | 11.0 | 165 | 0.5935 | 0.7010 | 0.8190 | 0.7600 |
| 0.5251 | 12.0 | 180 | 0.6129 | 0.6789 | 0.7849 | 0.7319 |
| 0.4395 | 13.0 | 195 | 0.6564 | 0.6912 | 0.7872 | 0.7392 |
| 0.3921 | 14.0 | 210 | 0.7059 | 0.6446 | 0.7173 | 0.6810 |
| 0.3399 | 15.0 | 225 | 0.7605 | 0.6887 | 0.7829 | 0.7358 |
| 0.3219 | 16.0 | 240 | 0.7614 | 0.6569 | 0.7328 | 0.6948 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_mrpc_96
|
gokuls
| 2023-01-26T12:32:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:30:28Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_add_GLUE_Experiment_mrpc_96
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_mrpc_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6239
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6686 | 1.0 | 15 | 0.6467 | 0.6838 | 0.8122 | 0.7480 |
| 0.6433 | 2.0 | 30 | 0.6372 | 0.6838 | 0.8122 | 0.7480 |
| 0.6378 | 3.0 | 45 | 0.6319 | 0.6838 | 0.8122 | 0.7480 |
| 0.6344 | 4.0 | 60 | 0.6284 | 0.6838 | 0.8122 | 0.7480 |
| 0.6343 | 5.0 | 75 | 0.6266 | 0.6838 | 0.8122 | 0.7480 |
| 0.6299 | 6.0 | 90 | 0.6252 | 0.6838 | 0.8122 | 0.7480 |
| 0.6335 | 7.0 | 105 | 0.6247 | 0.6838 | 0.8122 | 0.7480 |
| 0.6308 | 8.0 | 120 | 0.6243 | 0.6838 | 0.8122 | 0.7480 |
| 0.6306 | 9.0 | 135 | 0.6243 | 0.6838 | 0.8122 | 0.7480 |
| 0.6302 | 10.0 | 150 | 0.6241 | 0.6838 | 0.8122 | 0.7480 |
| 0.6296 | 11.0 | 165 | 0.6241 | 0.6838 | 0.8122 | 0.7480 |
| 0.6305 | 12.0 | 180 | 0.6239 | 0.6838 | 0.8122 | 0.7480 |
| 0.634 | 13.0 | 195 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
| 0.63 | 14.0 | 210 | 0.6243 | 0.6838 | 0.8122 | 0.7480 |
| 0.6314 | 15.0 | 225 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
| 0.6286 | 16.0 | 240 | 0.6239 | 0.6838 | 0.8122 | 0.7480 |
| 0.6326 | 17.0 | 255 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_cola_256
|
gokuls
| 2023-01-26T12:31:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:28:34Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert_add_GLUE_Experiment_cola_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_cola_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6125 | 1.0 | 34 | 0.6201 | 0.0 |
| 0.6084 | 2.0 | 68 | 0.6182 | 0.0 |
| 0.6071 | 3.0 | 102 | 0.6184 | 0.0 |
| 0.6081 | 4.0 | 136 | 0.6186 | 0.0 |
| 0.6081 | 5.0 | 170 | 0.6182 | 0.0 |
| 0.607 | 6.0 | 204 | 0.6185 | 0.0 |
| 0.6082 | 7.0 | 238 | 0.6181 | 0.0 |
| 0.609 | 8.0 | 272 | 0.6184 | 0.0 |
| 0.607 | 9.0 | 306 | 0.6213 | 0.0 |
| 0.6082 | 10.0 | 340 | 0.6193 | 0.0 |
| 0.6081 | 11.0 | 374 | 0.6196 | 0.0 |
| 0.6071 | 12.0 | 408 | 0.6193 | 0.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_cola_192
|
gokuls
| 2023-01-26T12:30:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T12:27:38Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert_add_GLUE_Experiment_cola_192
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_cola_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6182
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6141 | 1.0 | 34 | 0.6201 | 0.0 |
| 0.6079 | 2.0 | 68 | 0.6185 | 0.0 |
| 0.6072 | 3.0 | 102 | 0.6184 | 0.0 |
| 0.6083 | 4.0 | 136 | 0.6193 | 0.0 |
| 0.6075 | 5.0 | 170 | 0.6182 | 0.0 |
| 0.607 | 6.0 | 204 | 0.6185 | 0.0 |
| 0.6082 | 7.0 | 238 | 0.6182 | 0.0 |
| 0.6085 | 8.0 | 272 | 0.6185 | 0.0 |
| 0.608 | 9.0 | 306 | 0.6202 | 0.0 |
| 0.6084 | 10.0 | 340 | 0.6189 | 0.0 |
| 0.6078 | 11.0 | 374 | 0.6189 | 0.0 |
| 0.6072 | 12.0 | 408 | 0.6186 | 0.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Heerak/xlm-roberta-base-finetuned-panx-fr
|
Heerak
| 2023-01-26T12:26:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-26T11:18:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8370531968451083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2777
- F1: 0.8371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 191 | 0.3122 | 0.7961 |
| 0.4151 | 2.0 | 382 | 0.2749 | 0.8312 |
| 0.4151 | 3.0 | 573 | 0.2777 | 0.8371 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
andyleow/q-FrozenLake-v1-4x4
|
andyleow
| 2023-01-26T12:15:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-26T12:15:55Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.58 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is assumed to be the small download-and-unpickle helper from the course notebook.
model = load_from_hub(repo_id="andyleow/q-FrozenLake-v1-4x4", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
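A fuller evaluation sketch is below. It assumes the pickled file holds a dict with the learned table under a "qtable" key alongside "env_id" (as in the Deep RL course notebooks) and uses gymnasium's 5-tuple step API; none of this is documented by the card itself.
```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the saved agent
path = hf_hub_download(repo_id="andyleow/q-FrozenLake-v1-4x4", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# Roll out one episode, always acting greedily with respect to the Q-table
env = gym.make(model["env_id"])
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```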
|
haouarin/TextGeneration
|
haouarin
| 2023-01-26T11:28:02Z | 0 | 0 | null |
[
"pytorch",
"bert",
"multilingual",
"ar",
"dz",
"license:apache-2.0",
"region:us"
] | null | 2023-01-26T10:43:33Z |
---
language:
- ar
- dz
tags:
- pytorch
- bert
- multilingual
- ar
- dz
license: apache-2.0
widget:
- text: " أنا من الجزائر من ولاية [MASK] "
- text: "rabi [MASK] khouya sami"
- text: " ربي [MASK] خويا لعزيز"
- text: "tahya el [MASK]."
- text: "rouhi ya dzayer [MASK]"
inference: true
---
|
umass/mpnet-base-mimics-query-facet-encoder
|
umass
| 2023-01-26T10:56:47Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-01-26T10:53:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# umass/mpnet-base-mimics-query-facet-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('umass/mpnet-base-mimics-query-facet-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('umass/mpnet-base-mimics-query-facet-encoder')
model = AutoModel.from_pretrained('umass/mpnet-base-mimics-query-facet-encoder')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=umass/mpnet-base-mimics-query-facet-encoder)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 18092 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'dot_score'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
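The following sketch shows how these parameters could map onto a `sentence_transformers` training call. The batch size, loss, scale, similarity function, epochs, warmup steps, learning rate and weight decay come from the lists above, and CLS pooling over an MPNet encoder matches the architecture section below; the base checkpoint and the toy query/facet pairs are assumptions.
```python
from sentence_transformers import InputExample, SentenceTransformer, losses, models, util
from torch.utils.data import DataLoader

# Assumed base checkpoint; only the pooling mode (CLS) is confirmed by the card.
word_embedding = models.Transformer("microsoft/mpnet-base", max_seq_length=512)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="cls")
model = SentenceTransformer(modules=[word_embedding, pooling])

# Toy (query, facet) pairs standing in for the real training data.
train_examples = [
    InputExample(texts=["weather tomorrow", "hourly forecast"]),
    InputExample(texts=["python sort list", "sort in descending order"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.dot_score)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```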
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
arnonl/a2c-PandaReachDense-v2
|
arnonl
| 2023-01-26T10:39:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-26T10:37:45Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.77 +/- 0.93
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
andrei-saceleanu/a2c-PandaReachDense-v2
|
andrei-saceleanu
| 2023-01-26T10:07:50Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-23T16:00:16Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.04 +/- 0.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
css919/ppo-SnowballTarget
|
css919
| 2023-01-26T09:58:44Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-26T09:58:37Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: css919/ppo-SnowballTarget1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
phob0s/bert-tiny
|
phob0s
| 2023-01-26T09:55:34Z | 502 | 1 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2023-01-19T08:53:21Z |
Test clone of https://huggingface.co/prajjwal1/bert-tiny
Mentioned in
* Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics (Bhargava, Drozd and Rogers)
* Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation (Turc et al.)
|
arnonl/a2c-AntBulletEnv-v0
|
arnonl
| 2023-01-26T09:52:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-26T09:51:25Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1234.54 +/- 172.57
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
orenk/a2c-AntBulletEnv-v0
|
orenk
| 2023-01-26T09:50:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-26T09:48:58Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1400.57 +/- 347.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
shahriarebrampour/distilbert-base-uncased-finetuned-imdb
|
shahriarebrampour
| 2023-01-26T09:31:10Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-26T09:05:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5274 | 1.0 | 157 | 2.4476 |
| 2.5259 | 2.0 | 314 | 2.4390 |
| 2.5134 | 3.0 | 471 | 2.4330 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
MaCoCu/XLMR-MaltBERTa
|
MaCoCu
| 2023-01-26T09:18:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"xlm-roberta",
"feature-extraction",
"MaltBERTa",
"MaCoCu",
"mt",
"license:cc0-1.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-08-11T12:57:46Z |
---
license: cc0-1.0
language:
- mt
tags:
- MaltBERTa
- MaCoCu
---
# Model description
**XLMR-MaltBERTa** is a large pre-trained language model trained on Maltese texts. It was created by continuing training from the [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) model. It was developed as part of the [MaCoCu](https://macocu.eu/) project. The main developer is [Rik van Noord](https://www.rikvannoord.nl/) from the University of Groningen.
XLMR-MaltBERTa was trained on 3.2GB of text, which is equal to 439M tokens. It was trained for 50,000 steps with a batch size of 1,024. It uses the same vocabulary as the original XLMR-large model. It is trained on the same data as [MaltBERTa](https://huggingface.co/RVN/MaltBERTa), though that model was trained from scratch using the RoBERTa architecture.
The training and fine-tuning procedures are described in detail on our [Github repo](https://github.com/macocu/LanguageModels).
# How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("RVN/XLMR-MaltBERTa")
model = AutoModel.from_pretrained("RVN/XLMR-MaltBERTa") # PyTorch
model = TFAutoModel.from_pretrained("RVN/XLMR-MaltBERTa") # Tensorflow
```
# Data
For training, we used all Maltese data that was present in the [MaCoCu](https://macocu.eu/), Oscar and mc4 corpora. After de-duplicating the data, we were left with a total of 3.2GB of text.
# Benchmark performance
We tested the performance of MaltBERTa on the UPOS and XPOS benchmark of the [Universal Dependencies](https://universaldependencies.org/) project. Moreover, we test on a Google Translated version of the COPA data set (see our [Github repo](https://github.com/RikVN/COPA) for details). We compare performance to the strong multi-lingual models XLMR-base and XLMR-large, though note that Maltese was not one of the training languages for those models. We also compare to the recently introduced Maltese language models [BERTu](https://huggingface.co/MLRS/BERTu), [mBERTu](https://huggingface.co/MLRS/mBERTu) and our own [MaltBERTa](https://huggingface.co/RVN/MaltBERTa). For details regarding the fine-tuning procedure you can check out our [Github](https://github.com/macocu/LanguageModels).
Scores are averages of three runs for UPOS/XPOS and 10 runs for COPA. We use the same hyperparameter settings for all models for UPOS/XPOS, while for COPA we optimize on the dev set.
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:| :--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 93.6 | 93.2 | 93.4 | 93.2 | 52.2 |
| **XLM-R-large** | 94.9 | 94.4 | 95.1 | 94.7 | 54.0 |
| **BERTu** | 97.5 | 97.6 | 95.7 | 95.8 | **55.6** |
| **mBERTu** | **97.7** | 97.8 | 97.9 | 98.1 | 52.6 |
| **MaltBERTa** | 95.7 | 95.8 | 96.1 | 96.0 | 53.7 |
| **XLMR-MaltBERTa** | **97.7** | **98.1** | **98.1** | **98.2** | 54.4 |
# Acknowledgements
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). The authors received funding from the European Union’s Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341 (MaCoCu).
# Citation
If you use this model, please cite the following paper:
```bibtex
@inproceedings{non-etal-2022-macocu,
title = "{M}a{C}o{C}u: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages",
author = "Ba{\~n}{\'o}n, Marta and
Espl{\`a}-Gomis, Miquel and
Forcada, Mikel L. and
Garc{\'\i}a-Romero, Cristian and
Kuzman, Taja and
Ljube{\v{s}}i{\'c}, Nikola and
van Noord, Rik and
Sempere, Leopoldo Pla and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Rupnik, Peter and
Suchomel, V{\'\i}t and
Toral, Antonio and
van der Werff, Tobias and
Zaragoza, Jaume",
booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
month = jun,
year = "2022",
address = "Ghent, Belgium",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2022.eamt-1.41",
pages = "303--304"
}
```
|
nanashisan/LoRa_Deedlit
|
nanashisan
| 2023-01-26T08:51:57Z | 0 | 10 | null |
[
"ja",
"region:us"
] | null | 2023-01-26T06:47:25Z |
---
language:
- ja
duplicated_from: nanashisan/LoRa_Deedlit
---
Prompt keyword: Deedlit
- Deedlit, 1girl, retro artstyle, solo, pointy ears, 1990s (style), weapon, elf, sword, armor, cape
%2C(masterpiece)%2C%20((an%20extremely%20detailed%20and%20delicate))%2C%20(8k%20cg%20wallpaper)%2C%20(amazing)%2Coriginal%2C(extre.png)

|
leadawon/ko-gangwon-nmt-v1
|
leadawon
| 2023-01-26T08:33:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-26T01:20:25Z |
---
model-index:
- name: ko-gangwon-nmt-v1
results: []
language:
- ko
pipeline_tag: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jeolla-ko-nmt-v1
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.199624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.525000 | 1.0 | 3017 | 0.424946 |
| 0.318700 | 2.0 | 6034 | 0.285191 |
| 0.244100 | 3.0 | 9051 | 0.237215 |
| 0.195900 | 4.0 | 12068 | 0.216691 |
| 0.160500 | 5.0 | 15085 | 0.203532 |
| 0.135400 | 6.0 | 18092 | 0.199624 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
edusei/sentiment_analysis_on_covid_tweets
|
edusei
| 2023-01-26T08:23:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-26T07:58:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sentiment_analysis_on_covid_tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_analysis_on_covid_tweets
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5883
- eval_accuracy: 0.771
- eval_runtime: 33.4887
- eval_samples_per_second: 59.722
- eval_steps_per_second: 7.465
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
charanhu/text_to_sql_1
|
charanhu
| 2023-01-26T07:52:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:charanhu/autotrain-data-text_to_sql_finetune",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-01-26T07:40:12Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- charanhu/autotrain-data-text_to_sql_finetune
co2_eq_emissions:
emissions: 16.03787641705279
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3073487571
- CO2 Emissions (in grams): 16.0379
## Validation Metrics
- Loss: 0.140
- SacreBLEU: 77.653
- Gen len: 42.019
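## Usage
Since the card does not include a usage snippet, here is a minimal, hedged sketch: the checkpoint is a T5 seq2seq model, so standard text2text generation should apply. The input below is a hypothetical example; check the linked AutoTrain dataset for the exact prompt template.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "charanhu/text_to_sql_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical natural-language question; the expected input format may differ.
question = "How many employees work in the sales department?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```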
|
charanhu/text_to_sql_4
|
charanhu
| 2023-01-26T07:51:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:charanhu/autotrain-data-text_to_sql_finetune",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-01-26T07:40:13Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- charanhu/autotrain-data-text_to_sql_finetune
co2_eq_emissions:
emissions: 15.216605611144294
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3073487569
- CO2 Emissions (in grams): 15.2166
## Validation Metrics
- Loss: 0.159
- SacreBLEU: 72.889
- Gen len: 40.580
|
hmehta92/finetuned-model
|
hmehta92
| 2023-01-26T07:40:12Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-01-26T07:37:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1604 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 802,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
carlosmirandad/rl-class-dqn-SpaceInvadersNoFrameskip-v4
|
carlosmirandad
| 2023-01-26T07:25:05Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-24T09:09:33Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 531.50 +/- 134.70
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga carlosmirandad -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga carlosmirandad -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga carlosmirandad
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0005),
('learning_starts', 100000),
('n_timesteps', 5000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
xiaozhangMJXXZ/Genshin-lora-all
|
xiaozhangMJXXZ
| 2023-01-26T07:23:12Z | 0 | 77 | null |
[
"region:us"
] | null | 2023-01-22T16:55:05Z |
https://t.me/+a-k8rVfjIVk3NGU1
https://t.me/loraeveryone
These are the Telegram groups. Telegram will always be updated first, because Telegram allows uploading the original files directly; the Hugging Face page will be updated more slowly!
Anything you cannot download from Hugging Face can also be downloaded directly from Telegram.
This is a collection of LoRAs for Genshin Impact characters; everyone is welcome to contribute additions to keep it up to date!!!
There is both a bundled download of everything and downloads for individual characters. Because files with Chinese names cannot be downloaded directly, they are provided as archives;
after downloading, unzip them and you will find the corresponding Chinese names inside.
Contact for the maintainer ("the Principal"): QQ 3062945846
These files are only reuploaded and organized for the convenience of Chinese-speaking players!!
Remember to check the txt files for each character's trigger words.
We deeply respect every LoRA author!!
Thank you all for your work!!
Hello everyone, this is the Principal. I am currently putting together higher-quality LoRA models: more than 70 have been organized so far, labeled in Chinese, with the trigger tags written directly into the file names. For some complex outfits and accessories, a document with the same name is included alongside for easy reference. If you have good LoRAs that are not yet in the current collection,
please send them to me. Once everything has been organized and categorized, I will share it with everyone (LoRA models, that is, not the usual full checkpoints).
|
Heerak/xlm-roberta-base-finetuned-panx-de-fr
|
Heerak
| 2023-01-26T07:21:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-26T06:05:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1637
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 715 | 0.2046 | 0.8109 |
| 0.2163 | 2.0 | 1430 | 0.1678 | 0.8467 |
| 0.2163 | 3.0 | 2145 | 0.1637 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
smile3634/jeju-ko-nmt-v7
|
smile3634
| 2023-01-26T06:57:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-25T01:41:53Z |
---
tags:
- generated_from_trainer
model-index:
- name: jeju-ko-nmt-v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jeju-ko-nmt-v7
This model is a fine-tuned version of [leadawon/jeju-ko-nmt-v6](https://huggingface.co/leadawon/jeju-ko-nmt-v6) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
Korakoe/Koromiko-Diffusion
|
Korakoe
| 2023-01-26T06:12:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-26T06:12:00Z |
---
license: creativeml-openrail-m
---
|
hesw23168/SD_Elysium_Kuro_Model
|
hesw23168
| 2023-01-26T05:25:03Z | 0 | 34 | null |
[
"license:openrail",
"region:us"
] | null | 2023-01-25T03:48:50Z |
---
license: openrail
---
Also on https://civitai.com/models/5301/elysium-kuro-anime
The anime model is a custom mix + finetune on a dataset of high-quality images (the mix includes Anything 4.0, WD 1.4 Booru, and Seek Art Mega V1) and contains the kl-f8-anime2 VAE from Waifu Diffusion.
Example settings:
Negative prompt: (lowres:1.1), (worst quality:1.2), (low quality:1.1), bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, normal quality, jpeg artifacts, signature, watermark, username, blurry
(General model): Clip skip 1, VAE: 'vae-ft-mse-840000' from StabilityAI (included)
(Anime model): Clip skip 2, VAE: 'kl-f8-anime2.ckpt' from Waifu Diffusion (included)
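For use outside a WebUI, a minimal diffusers sketch might look like the following; the checkpoint filename is an assumption (check the repo files), and the clip-skip setting from above is not reproduced here.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes a local download of the anime-model checkpoint; the filename is hypothetical.
pipe = StableDiffusionPipeline.from_single_file(
    "elysium_kuro_anime.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, looking at viewer, cherry blossoms, detailed background",
    negative_prompt="(lowres:1.1), (worst quality:1.2), (low quality:1.1), bad anatomy, bad hands",
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```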
Example images from anime model:

General model coming soon.
|
FredZhang7/google-safesearch-mini-tfjs
|
FredZhang7
| 2023-01-26T03:48:45Z | 1 | 2 |
tf-keras
|
[
"tf-keras",
"pytorch",
"inceptionv3",
"safety-checker",
"tensorflow",
"node.js",
"image-classification",
"custom_code",
"license:creativeml-openrail-m",
"region:us"
] |
image-classification
| 2022-12-23T04:36:19Z |
---
license: creativeml-openrail-m
tags:
- safety-checker
- tensorflow
- node.js
pipeline_tag: image-classification
---
# Google Safesearch Mini Model Card
<a href="https://huggingface.co/FredZhang7/google-safesearch-mini-v2"> <font size="4"> <bold> Version 2 is here! </bold> </font> </a>
This model is trained on 2,220,000+ images scraped from Google Images, Reddit, Imgur, and Github.
The InceptionV3 and Xception models have been fine-tuned to predict the likelihood of an image falling into one of three categories: nsfw_gore, nsfw_suggestive, and safe.
After 20 epochs on PyTorch, the finetuned InceptionV3 model achieves 94% accuracy on both the training and test data. After 3.3 epochs on Keras, the finetuned Xception model scores 94% accuracy on the training set and 92% on the test set.
Not only is this model accurate, but it also offers a significant advantage over stable diffusion safety checkers. By using our model, users can save 1.12GB of RAM and disk space.
<br>
# PyTorch
The PyTorch model runs much slower with transformers, so downloading it externally is a better option.
```bash
pip install --upgrade torchvision
```
```python
import torch, os, warnings, requests
from io import BytesIO
from PIL import Image
from urllib.request import urlretrieve
from torchvision import transforms
PATH_TO_IMAGE = 'https://images.unsplash.com/photo-1594568284297-7c64464062b1'
USE_CUDA = False
warnings.filterwarnings("ignore")
def download_model():
print("Downloading google_safesearch_mini.bin...")
urlretrieve("https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/pytorch_model.bin", "google_safesearch_mini.bin")
def eval():
if not os.path.exists("google_safesearch_mini.bin"):
download_model()
model = torch.jit.load('./google_safesearch_mini.bin')
img = Image.open(PATH_TO_IMAGE).convert('RGB') if not (PATH_TO_IMAGE.startswith('http://') or PATH_TO_IMAGE.startswith('https://')) else Image.open(BytesIO(requests.get(PATH_TO_IMAGE).content)).convert('RGB')
transform = transforms.Compose([transforms.Resize(299), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
img = transform(img).unsqueeze(0)
if USE_CUDA:
img, model = img.cuda(), model.cuda()
else:
img, model = img.cpu(), model.cpu()
model.eval()
with torch.no_grad():
out, _ = model(img)
_, predicted = torch.max(out.data, 1)
classes = {0: 'nsfw_gore', 1: 'nsfw_suggestive', 2: 'safe'}
# account for edge cases
if predicted[0] != 2 and abs(out[0][2] - out[0][predicted[0]]) > 0.20:
img = Image.new('RGB', (299, 299), color=(0, 255, 255))  # fixed: the original referenced an undefined `image` variable
print("\033[93m" + "safe" + "\033[0m")
else:
print('\n\033[1;31m' + classes[predicted.item()] + '\033[0m' if predicted.item() != 2 else '\033[1;32m' + classes[predicted.item()] + '\033[0m\n')
if __name__ == '__main__':
eval()
```
Output Example:

<br>
# Keras
```python
import tensorflow as tf
from PIL import Image
import requests, os
# download the model
url = "https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/saved_model.pb"
r = requests.get(url, allow_redirects=True)
if not os.path.exists('tensorflow'):
os.makedirs('tensorflow')
open('tensorflow/saved_model.pb', 'wb').write(r.content)
# download the variables
url = "https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/variables/variables.data-00000-of-00001"
r = requests.get(url, allow_redirects=True)
if not os.path.exists('tensorflow/variables'):
os.makedirs('tensorflow/variables')
open('tensorflow/variables/variables.data-00000-of-00001', 'wb').write(r.content)
url = "https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/variables/variables.index"
r = requests.get(url, allow_redirects=True)
open('tensorflow/variables/variables.index', 'wb').write(r.content)
# load the model
model = tf.saved_model.load('./tensorflow')
image = Image.open('cat.jpg')
image = image.resize((299, 299))
image = tf.convert_to_tensor(image)
image = tf.expand_dims(image, 0)
# run the model
tensor = model(image)
classes = ['nsfw_gore', 'nsfw_suggestive', 'safe']
prediction = classes[tf.argmax(tensor, 1)[0]]
print('\033[1;32m' + prediction + '\033[0m' if prediction == 'safe' else '\033[1;33m' + prediction + '\033[0m')
```
Output Example:

<br>
# Tensorflow.js
```bash
npm i @tensorflow/tfjs-node
```
```javascript
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');
const { pipeline } = require('stream');
const { promisify } = require('util');
const download = async (url, path) => {
// Taken from https://levelup.gitconnected.com/how-to-download-a-file-with-node-js-e2b88fe55409
const streamPipeline = promisify(pipeline);
const response = await fetch(url);
if (!response.ok) {
throw new Error(`unexpected response ${response.statusText}`);
}
await streamPipeline(response.body, fs.createWriteStream(path));
};
async function run() {
// download saved model and variables from https://huggingface.co/FredZhang7/google-safesearch-mini/tree/main/tensorflow
if (!fs.existsSync('tensorflow')) {
fs.mkdirSync('tensorflow');
await download('https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/saved_model.pb', 'tensorflow/saved_model.pb');
fs.mkdirSync('tensorflow/variables');
await download('https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/variables/variables.data-00000-of-00001', 'tensorflow/variables/variables.data-00000-of-00001');
await download('https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/variables/variables.index', 'tensorflow/variables/variables.index');
}
// load model and image
const model = await tf.node.loadSavedModel('./tensorflow/');
const image = tf.node.decodeImage(fs.readFileSync('cat.jpg'), 3);
// predict
const input = tf.expandDims(image, 0);
const tensor = model.predict(input);
const max = tensor.argMax(1);
const classes = ['nsfw_gore', 'nsfw_suggestive', 'safe'];
console.log('\x1b[32m%s\x1b[0m', classes[max.dataSync()[0]], '\n');
}
run();
```
Output Example:

<br>
# Bias and Limitations
Each person's definition of "safe" is different. The images in the dataset are classified as safe/unsafe by Google SafeSearch, Reddit, and Imgur.
It is possible that some images may be safe to others but not to you. Also, when a model encounters an image with things it hasn't seen, it likely makes wrong predictions.
This is why in the PyTorch example, I accounted for the "edge cases" before printing the predictions.
|
gokuls/mobilebert_sa_GLUE_Experiment_mnli_256
|
gokuls
| 2023-01-26T03:03:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T16:30:13Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_mnli_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6030309194467046
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_mnli_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8790
- Accuracy: 0.6030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0008 | 1.0 | 3068 | 0.9490 | 0.5405 |
| 0.9205 | 2.0 | 6136 | 0.9166 | 0.5675 |
| 0.8928 | 3.0 | 9204 | 0.9022 | 0.5786 |
| 0.872 | 4.0 | 12272 | 0.8843 | 0.5967 |
| 0.8531 | 5.0 | 15340 | 0.8807 | 0.5959 |
| 0.8359 | 6.0 | 18408 | 0.8763 | 0.5999 |
| 0.8197 | 7.0 | 21476 | 0.8815 | 0.6009 |
| 0.8028 | 8.0 | 24544 | 0.9012 | 0.5934 |
| 0.786 | 9.0 | 27612 | 0.8633 | 0.6191 |
| 0.769 | 10.0 | 30680 | 0.8734 | 0.6098 |
| 0.752 | 11.0 | 33748 | 0.8682 | 0.6220 |
| 0.736 | 12.0 | 36816 | 0.8741 | 0.6175 |
| 0.7204 | 13.0 | 39884 | 0.8994 | 0.6048 |
| 0.7038 | 14.0 | 42952 | 0.8940 | 0.6079 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
starcel/asr-conformer-kdialectspeech
|
starcel
| 2023-01-26T02:54:57Z | 2 | 1 |
speechbrain
|
[
"speechbrain",
"automatic-speech-recognition",
"ko",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2023-01-26T01:32:28Z |
---
license: apache-2.0
language:
- ko
metrics:
- cer
- wer
library_name: speechbrain
pipeline_tag: automatic-speech-recognition
---
This model file is a Conformer ASR model trained on the dataset from the 2022 AI Training Data Construction Project <18. Middle-aged and Senior Dialect Data>.
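A minimal SpeechBrain inference sketch (assuming the repo contains the `hyperparams.yaml` and checkpoint layout expected by `EncoderDecoderASR`; the audio path is a placeholder):
```python
from speechbrain.pretrained import EncoderDecoderASR

# Load the Conformer ASR model from the Hub and transcribe one Korean audio file.
asr_model = EncoderDecoderASR.from_hparams(
    source="starcel/asr-conformer-kdialectspeech",
    savedir="pretrained_models/asr-conformer-kdialectspeech",
)
print(asr_model.transcribe_file("sample.wav"))  # "sample.wav" is a placeholder path
```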
|
Tristan/gpt2-summarization_reward_model
|
Tristan
| 2023-01-26T02:47:58Z | 0 | 0 | null |
[
"pytorch",
"generated_from_trainer",
"license:mit",
"region:us"
] | null | 2023-01-23T20:45:09Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2-summarization_reward_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-summarization_reward_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7473
- Accuracy: 0.6006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6421 | 1.0 | 1451 | 0.6815 | 0.6036 |
| 0.5893 | 2.0 | 2902 | 0.6764 | 0.6048 |
| 0.5488 | 3.0 | 4353 | 0.7074 | 0.6012 |
| 0.5187 | 4.0 | 5804 | 0.7254 | 0.6009 |
| 0.5034 | 5.0 | 7255 | 0.7473 | 0.6006 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
leenw2/ppo-LunarLander-nw
|
leenw2
| 2023-01-26T02:31:22Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-26T02:30:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.76 +/- 24.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's "Files" tab for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename is assumed, not confirmed by this card.
checkpoint = load_from_hub("leenw2/ppo-LunarLander-nw", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
facebook/opt-iml-max-1.3b
|
facebook
| 2023-01-26T01:31:38Z | 9,572 | 44 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"arxiv:2212.12017",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-01-26T00:08:30Z |
---
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
# OPT-IML
## Model Description
[OPT-IML (OPT + Instruction Meta-Learning)](https://arxiv.org/abs/2212.12017) is a set of instruction-tuned versions of OPT, on a collection of ~2000 NLP tasks gathered from 8 NLP benchmarks, called OPT-IML Bench.
We provide two model versions:
* OPT-IML trained on 1500 tasks with several tasks held-out for purposes of downstream evaluation, and
* OPT-IML-Max trained on all ~2000 tasks
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-iml-max-1.3b")
>>> generator("What is the capital of USA?")
```
### Limitations and bias
While OPT-IML models outperform baseline OPT on an extensive set of evaluations,
they are still susceptible to the various risks associated with large language models,
relating to factual correctness, generation of toxic language, and reinforcement of stereotypes. While we release our
OPT-IML models to proliferate future work on instruction-tuning and to improve the availability
of large instruction-tuned causal LMs, the use of these models should be
accompanied by responsible best practices.
## Training data
OPT-IML models are trained on OPT-IML Bench, a large benchmark for Instruction MetaLearning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, including Super-NaturalInstructions, FLAN, PromptSource, etc.
## Training procedure
The texts are tokenized using the GPT2 byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 30B model was fine-tuned on 64 40GB A100 GPUs. During fine-tuning, models saw approximately 2 billion tokens, which is only 0.6% of the pre-training
budget of OPT.
### BibTeX entry and citation info
```bibtex
@misc{iyer2022opt,
title={OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization},
author={Iyer, Srinivasan and Lin, Xi Victoria and Pasunuru, Ramakanth and Mihaylov, Todor and Simig, D{\'a}niel and Yu, Ping and Shuster, Kurt and Wang, Tianlu and Liu, Qing and Koura, Punit Singh and others},
year={2022},
eprint={2212.12017},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
OpenAssistant/reward-model-deberta-v3-base
|
OpenAssistant
| 2023-01-26T01:07:57Z | 711 | 10 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"reward-model",
"reward_model",
"RLHF",
"en",
"dataset:openai/webgpt_comparisons",
"dataset:openai/summarize_from_feedback",
"dataset:Dahoas/instruct-synthetic-prompt-responses",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-15T11:06:39Z |
---
license: mit
datasets:
- openai/webgpt_comparisons
- openai/summarize_from_feedback
- Dahoas/instruct-synthetic-prompt-responses
language:
- en
metrics:
- accuracy
tags:
- reward-model
- reward_model
- RLHF
---
# Reward model trained from human feedback
Reward model (RM) trained to predict which generated answer is better judged by a human, given a question.
RMs are useful in these domains:
- QA model evaluation
- serves as reward score in RLHF
All models are trained on these datasets with the same split seed across datasets (when a validation split wasn't available):
- [webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
# How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-base"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```
# Performance
Validation split accuracy
| Model | [WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | [Summary](https://huggingface.co/datasets/openai/summarize_from_feedback) | [SytheticGPT](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) |
|---|---|---|---|
| [electra-large-discriminator](https://huggingface.co/OpenAssistant/reward-model-electra-large-discriminator) | 59.30 | 68.66 | 99.85 |
| [deberta-v3-large](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large) | 61.13 | 72.23 | 99.94 |
| [deberta-v3-base](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base) | 59.07 | 66.84 | 99.85 |
It's likely that SyntheticGPT has some kind of surface pattern in the chosen-rejected pairs which makes it trivial to differentiate the better answer.
|
IkariDev/Xynaptix
|
IkariDev
| 2023-01-26T00:41:49Z | 0 | 3 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-12T14:11:57Z |
---
license: creativeml-openrail-m
---
|
mrm8488/xlm-roberta-large-finetuned-HC3-mix
|
mrm8488
| 2023-01-26T00:38:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"doi:10.57967/hf/0305",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T14:04:10Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-large-finetuned-HC3-mix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-HC3-mix
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6998
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:------:|:---------------:|:---:|
| 0.6506 | 1.0 | 35824 | 0.6998 | 0.0 |
| 0.6481 | 2.0 | 71648 | 0.7662 | 0.0 |
| 0.6391 | 3.0 | 107472 | 0.7492 | 0.0 |
| 0.6396 | 4.0 | 143296 | 0.7358 | 0.0 |
| 0.6366 | 5.0 | 179120 | 0.7259 | 0.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Luis988/Generador
|
Luis988
| 2023-01-26T00:03:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-26T00:03:07Z |
---
license: creativeml-openrail-m
---
|
cdefghijkl/anime-m-series-vol1
|
cdefghijkl
| 2023-01-25T23:39:52Z | 0 | 3 | null |
[
"text-to-image",
"stable-diffusion",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-01-13T17:48:10Z |
---
license: creativeml-openrail-m
language:
- en
tags:
- text-to-image
- stable-diffusion
---
A collection of anime models merged by me. Will update info and examples later.
|
gustavecortal/roberta_emo
|
gustavecortal
| 2023-01-25T23:16:31Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-22T19:33:20Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta_emo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_emo
This model is a fine-tuned version of [ibm/ColD-Fusion](https://huggingface.co/ibm/ColD-Fusion) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
## Model Recycling
[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=2.24&mnli_lp=nan&20_newsgroup=0.54&ag_news=0.46&amazon_reviews_multi=-0.50&anli=1.81&boolq=2.93&cb=21.52&cola=-0.12&copa=22.30&dbpedia=0.20&esnli=-0.30&financial_phrasebank=0.99&imdb=-0.12&isear=0.54&mnli=-0.16&mrpc=0.37&multirc=2.85&poem_sentiment=4.52&qnli=0.47&qqp=0.24&rotten_tomatoes=2.95&rte=10.99&sst2=1.64&sst_5bins=0.79&stsb=1.59&trec_coarse=0.09&trec_fine=3.44&tweet_ev_emoji=-0.31&tweet_ev_emotion=0.65&tweet_ev_hate=-0.40&tweet_ev_irony=4.08&tweet_ev_offensive=2.08&tweet_ev_sentiment=-0.16&wic=3.02&wnli=-8.31&wsc=0.19&yahoo_answers=-0.14&model_name=gustavecortal%2Froberta_emo&base_name=roberta-base) using gustavecortal/roberta_emo as a base model yields average score of 78.47 in comparison to 76.22 by roberta-base.
The model is ranked 2nd among all tested models for the roberta-base architecture as of 18/01/2023
Results:
| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
| 85.8205 | 90.2333 | 66.08 | 52.1563 | 81.6208 | 89.2857 | 83.4132 | 71 | 77.5 | 90.6963 | 86.1 | 93.776 | 73.0117 | 86.8186 | 88.2353 | 64.0677 | 88.4615 | 92.8794 | 90.9523 | 91.3696 | 83.3935 | 95.7569 | 57.4661 | 91.5106 | 97.2 | 91.2 | 45.994 | 82.4771 | 52.4916 | 75.6378 | 86.6279 | 70.8727 | 68.4953 | 46.4789 | 63.4615 | 72.2667 |
For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
|
Periramm/q-taxi
|
Periramm
| 2023-01-25T22:57:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T22:57:04Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# Assumes `gym` is imported and that `load_from_hub` is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="Periramm/q-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Periramm/q-frozlake
|
Periramm
| 2023-01-25T22:54:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T22:54:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-frozlake
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# Assumes `gym` is imported and that `load_from_hub` is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="Periramm/q-frozlake", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
andyleow/q-Taxi-v3
|
andyleow
| 2023-01-25T22:53:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T22:53:23Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.38 +/- 2.85
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# Assumes `gym` is imported and that `load_from_hub` is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="andyleow/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gokuls/mobilebert_sa_GLUE_Experiment_sst2_128
|
gokuls
| 2023-01-25T22:07:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T21:21:54Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_sst2_128
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8004587155963303
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_sst2_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4330
- Accuracy: 0.8005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5124 | 1.0 | 527 | 0.4330 | 0.8005 |
| 0.2842 | 2.0 | 1054 | 0.4711 | 0.8028 |
| 0.2267 | 3.0 | 1581 | 0.4593 | 0.7982 |
| 0.2025 | 4.0 | 2108 | 0.7141 | 0.7856 |
| 0.1849 | 5.0 | 2635 | 0.4771 | 0.7982 |
| 0.1754 | 6.0 | 3162 | 0.6028 | 0.7901 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
sd-concepts-library/geggin21
|
sd-concepts-library
| 2023-01-25T22:06:22Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-01-25T22:06:18Z |
---
license: mit
---
### Geggin21 on Stable Diffusion
This is the `<geggin>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:










|
ATSiem/sd-class-butterflies-32
|
ATSiem
| 2023-01-25T22:04:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-01-25T22:04:28Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ATSiem/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
JYC333/Reinforce-CartPole-v1
|
JYC333
| 2023-01-25T21:31:15Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T21:10:03Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1000.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gokuls/mobilebert_sa_GLUE_Experiment_rte_128
|
gokuls
| 2023-01-25T21:21:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T21:18:02Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_rte_128
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_rte_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6926
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6935 | 1.0 | 20 | 0.6926 | 0.5271 |
| 0.6934 | 2.0 | 40 | 0.6930 | 0.5271 |
| 0.6931 | 3.0 | 60 | 0.6932 | 0.4982 |
| 0.6932 | 4.0 | 80 | 0.6929 | 0.5343 |
| 0.6929 | 5.0 | 100 | 0.6945 | 0.4729 |
| 0.6921 | 6.0 | 120 | 0.6929 | 0.5199 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
huggingtweets/garyvee-weseleybeats-wise_chimp
|
huggingtweets
| 2023-01-25T21:13:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-25T20:57:04Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1346208413596921864/fGYV6EpP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493524673962852353/qRxbC9Xq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1617635400624791571/D1GI8pze_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Wise Chimp & Gary Vaynerchuk & WeseleyBeats</div>
<div style="text-align: center; font-size: 14px;">@garyvee-weseleybeats-wise_chimp</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Wise Chimp & Gary Vaynerchuk & WeseleyBeats.
| Data | Wise Chimp | Gary Vaynerchuk | WeseleyBeats |
| --- | --- | --- | --- |
| Tweets downloaded | 3235 | 3248 | 2480 |
| Retweets | 20 | 599 | 157 |
| Short tweets | 42 | 899 | 385 |
| Tweets kept | 3173 | 1750 | 1938 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tzaq6vpn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @garyvee-weseleybeats-wise_chimp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/owdcta9r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/owdcta9r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/garyvee-weseleybeats-wise_chimp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emadsami/lk4
|
emadsami
| 2023-01-25T21:00:01Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-01-25T21:00:01Z |
---
license: bigscience-openrail-m
---
|
michal512/ppo-Huggy
|
michal512
| 2023-01-25T20:47:34Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-01-25T20:47:27Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: michal512/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
braedennorris/autotrain-enterprise_v_consumer-3052187265
|
braedennorris
| 2023-01-25T20:36:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:braedennorris/autotrain-data-enterprise_v_consumer",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T03:19:47Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- braedennorris/autotrain-data-enterprise_v_consumer
co2_eq_emissions:
emissions: 1.1718652256627062
---
Label mapping: Enterprise = 1, Consumer = 0
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3052187265
- CO2 Emissions (in grams): 1.1719
## Validation Metrics
- Loss: 0.428
- Accuracy: 0.824
- Precision: 0.805
- Recall: 0.896
- AUC: 0.891
- F1: 0.848
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/braedennorris/autotrain-enterprise_v_consumer-3052187265
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("braedennorris/autotrain-enterprise_v_consumer-3052187265", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("braedennorris/autotrain-enterprise_v_consumer-3052187265", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
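To turn the raw logits into one of the two classes, you can apply the label mapping stated at the top of this card; a minimal sketch, assuming the model's output indices follow that 0 = Consumer, 1 = Enterprise convention:
```python
import torch

# 1 = Enterprise, 0 = Consumer (label mapping above); assumes the output index order matches it
predicted_class = torch.argmax(outputs.logits, dim=-1).item()
print("Enterprise" if predicted_class == 1 else "Consumer")
```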
|
gmojko/a2c-PandaReachDense-v2_v2
|
gmojko
| 2023-01-25T20:30:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T20:22:47Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.96 +/- 1.86
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
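A minimal loading sketch, assuming the checkpoint follows the usual `<algo>-<env>.zip` naming convention (verify the filename in the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumption based on the common sb3 naming convention.
checkpoint = load_from_hub(
    repo_id="gmojko/a2c-PandaReachDense-v2_v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```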
|
gmojko/a2c-PandaReachDense-v2
|
gmojko
| 2023-01-25T20:26:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T16:46:28Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.68 +/- 1.22
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bonadio/Reinforce-PixelCopter-v1
|
bonadio
| 2023-01-25T19:44:01Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T15:38:54Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 50.90 +/- 42.58
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gnieto/DRL_unit5_snowball_target
|
gnieto
| 2023-01-25T19:31:35Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-25T19:31:29Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: gnieto/DRL_unit5_snowball_target
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gnieto/DRL_Unit5_Pyramids
|
gnieto
| 2023-01-25T19:30:41Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-01-25T19:29:50Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: gnieto/DRL_Unit5_Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jed351/bart-zh-hk-wiki
|
jed351
| 2023-01-25T19:27:08Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"cantonese",
"fill-mask",
"yue",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-23T12:26:03Z |
---
language:
- yue
tags:
- bart
- cantonese
- fill-mask
license: other
---
# bart-base-cantonese
This is the Cantonese model of BART base. It is based on another model created by: https://huggingface.co/Ayaka/bart-base-cantonese
## Usage
```python
from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
tokenizer = BertTokenizer.from_pretrained('jed351/bart-zh-hk-wiki')
model = BartForConditionalGeneration.from_pretrained('jed351/bart-zh-hk-wiki')
text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
output = text2text_generator('聽日就要返香港,我激動到[MASK]唔着', max_length=50, do_sample=False)
print(output[0]['generated_text'].replace(' ', ''))
```
**Note**: Please use the `BertTokenizer` for the model vocabulary. DO NOT use the original `BartTokenizer`.
|
EMBO/sd-panelization-v2
|
EMBO
| 2023-01-25T19:26:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:source_data_nlp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-10T10:27:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- source_data_nlp
metrics:
- precision
- recall
- f1
model-index:
- name: sd-panelization-v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data_nlp
type: source_data_nlp
args: PANELIZATION
metrics:
- name: Precision
type: precision
value: 0.9134245120169964
- name: Recall
type: recall
value: 0.9494824016563147
- name: F1
type: f1
value: 0.9311044937736871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sd-panelization-v2
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0050
- Accuracy Score: 0.9982
- Precision: 0.9134
- Recall: 0.9495
- F1: 0.9311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0048 | 1.0 | 431 | 0.0050 | 0.9982 | 0.9134 | 0.9495 | 0.9311 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.17.0
- Tokenizers 0.12.1
|
michal512/ppo-LunarLander-v2
|
michal512
| 2023-01-25T19:05:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T18:55:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.40 +/- 22.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
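A minimal loading and evaluation sketch, assuming the usual `<algo>-<env>.zip` filename (check the repo's file list) and that Box2D is installed for `LunarLander-v2`:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption based on the common sb3 naming convention.
checkpoint = load_from_hub(repo_id="michal512/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```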
|
kadirnar/osnet_x0_5_imagenet
|
kadirnar
| 2023-01-25T18:59:51Z | 0 | 0 | null |
[
"object-detection",
"computer-vision",
"sort",
"tracker",
"osnet",
"arxiv:1905.00953",
"arxiv:1910.06827",
"arxiv:1910.10093",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2023-01-25T18:58:11Z |
---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- sort
- tracker
- osnet
---
<div align="center">
<h1>
Torchreid-Pip: Packaged version of Torchreid
</h1>
<h4>
<img width="700" alt="teaser" src="https://raw.githubusercontent.com/goksenin-uav/torchreid-pip/main/doc/logo.png">
</h4>
</div>
This repo is a packaged version of the [Torchreid](https://github.com/KaiyangZhou/deep-person-reid) algorithm.
### Installation
```
pip install torchreid
```
### Model Description
[Learning Generalisable Omni-Scale Representations for Person Re-Identification](https://arxiv.org/abs/1905.00953):
[Omni-Scale Feature Learning for Person Re-Identification](https://arxiv.org/abs/1910.06827)
[Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch](https://arxiv.org/abs/1910.10093)
### Overview
##### 1. Import ``torchreid``
```python
import torchreid
```
##### 2. Load data manager
```python
datamanager = torchreid.data.ImageDataManager(
root="reid-data",
sources="market1501",
targets="market1501",
height=256,
width=128,
batch_size_train=32,
batch_size_test=100,
transforms=["random_flip", "random_crop"]
)
```
##### 3. Build model, optimizer and lr_scheduler
```python
model = torchreid.models.build_model(
name="resnet50",
num_classes=datamanager.num_train_pids,
loss="softmax",
pretrained=True
)
model = model.cuda()
optimizer = torchreid.optim.build_optimizer(
model,
optim="adam",
lr=0.0003
)
scheduler = torchreid.optim.build_lr_scheduler(
optimizer,
lr_scheduler="single_step",
stepsize=20
)
```
##### 4. Build engine
```python
engine = torchreid.engine.ImageSoftmaxEngine(
datamanager,
model,
optimizer=optimizer,
scheduler=scheduler,
label_smooth=True
)
```
##### 5. Run training and test
```python
engine.run(
save_dir="log/resnet50",
max_epoch=60,
eval_freq=10,
print_freq=10,
test_only=False
)
```
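Because this repository hosts the `osnet_x0_5` ImageNet weights, the most direct use is re-identification feature extraction rather than training from scratch; a minimal sketch, where the checkpoint path and image filenames are placeholders:
```python
from torchreid.utils import FeatureExtractor

extractor = FeatureExtractor(
    model_name="osnet_x0_5",
    model_path="osnet_x0_5_imagenet.pth",  # placeholder: path to the weights downloaded from this repo
    device="cuda",
)

# One embedding per person crop; inputs are paths to cropped pedestrian images.
features = extractor(["person_crop_1.jpg", "person_crop_2.jpg"])
print(features.shape)
```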
Citation
---------
If you use this code or the models in your research, please give credit to the following papers:
```bibtex
@article{torchreid,
title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
author={Zhou, Kaiyang and Xiang, Tao},
journal={arXiv preprint arXiv:1910.10093},
year={2019}
}
@inproceedings{zhou2019osnet,
title={Omni-Scale Feature Learning for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
booktitle={ICCV},
year={2019}
}
@article{zhou2021osnet,
title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
journal={TPAMI},
year={2021}
}
```
|
kadirnar/osnet_x1_0_imagenet
|
kadirnar
| 2023-01-25T18:59:45Z | 0 | 1 | null |
[
"object-detection",
"computer-vision",
"sort",
"tracker",
"osnet",
"arxiv:1905.00953",
"arxiv:1910.06827",
"arxiv:1910.10093",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2023-01-25T18:58:38Z |
---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- sort
- tracker
- osnet
---
<div align="center">
<h1>
Torchreid-Pip: Packaged version of Torchreid
</h1>
<h4>
<img width="700" alt="teaser" src="https://raw.githubusercontent.com/goksenin-uav/torchreid-pip/main/doc/logo.png">
</h4>
</div>
This repo is a packaged version of the [Torchreid](https://github.com/KaiyangZhou/deep-person-reid) algorithm.
### Installation
```
pip install torchreid
```
### Model Description
[Learning Generalisable Omni-Scale Representations for Person Re-Identification](https://arxiv.org/abs/1905.00953):
[Omni-Scale Feature Learning for Person Re-Identification](https://arxiv.org/abs/1910.06827)
[Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch](https://arxiv.org/abs/1910.10093)
### Overview
##### 1. Import ``torchreid``
```python
import torchreid
```
##### 2. Load data manager
```python
datamanager = torchreid.data.ImageDataManager(
root="reid-data",
sources="market1501",
targets="market1501",
height=256,
width=128,
batch_size_train=32,
batch_size_test=100,
transforms=["random_flip", "random_crop"]
)
```
##### 3. Build model, optimizer and lr_scheduler
```python
model = torchreid.models.build_model(
name="resnet50",
num_classes=datamanager.num_train_pids,
loss="softmax",
pretrained=True
)
model = model.cuda()
optimizer = torchreid.optim.build_optimizer(
model,
optim="adam",
lr=0.0003
)
scheduler = torchreid.optim.build_lr_scheduler(
optimizer,
lr_scheduler="single_step",
stepsize=20
)
```
##### 4. Build engine
```python
engine = torchreid.engine.ImageSoftmaxEngine(
datamanager,
model,
optimizer=optimizer,
scheduler=scheduler,
label_smooth=True
)
```
##### 5. Run training and test
```python
engine.run(
save_dir="log/resnet50",
max_epoch=60,
eval_freq=10,
print_freq=10,
test_only=False
)
```
Citation
---------
If you use this code or the models in your research, please give credit to the following papers:
```bibtex
@article{torchreid,
title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
author={Zhou, Kaiyang and Xiang, Tao},
journal={arXiv preprint arXiv:1910.10093},
year={2019}
}
@inproceedings{zhou2019osnet,
title={Omni-Scale Feature Learning for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
booktitle={ICCV},
year={2019}
}
@article{zhou2021osnet,
title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
journal={TPAMI},
year={2021}
}
```
|
sd-concepts-library/geggin
|
sd-concepts-library
| 2023-01-25T18:59:03Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-01-25T18:59:00Z |
---
license: mit
---
### Geggin on Stable Diffusion
This is the `<geggin>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
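If you would rather use the concept directly from `diffusers`, a minimal sketch (the base checkpoint below is an assumption; any Stable Diffusion 1.x model compatible with the embedding should work):
```python
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; the <geggin> embedding itself lives in this repository.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
pipe.load_textual_inversion("sd-concepts-library/geggin")

image = pipe("a photo of <geggin> on a table").images[0]
image.save("geggin.png")
```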
Here is the new concept you will be able to use as an `object`:










|
Brhnglc/ppo-SnowballTarget2
|
Brhnglc
| 2023-01-25T18:53:29Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-25T18:53:23Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Brhnglc/ppo-SnowballTarget2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kadirnar/strongsort
|
kadirnar
| 2023-01-25T18:49:17Z | 0 | 0 | null |
[
"object-detection",
"computer-vision",
"sort",
"tracker",
"strongsort",
"arxiv:2202.13514",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2023-01-25T18:49:08Z |
---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- sort
- tracker
- strongsort
---
### Model Description
[StrongSort](https://arxiv.org/abs/2202.13514): Make DeepSORT Great Again
<img src="https://raw.githubusercontent.com/dyhBUPT/StrongSORT/master/assets/MOTA-IDF1-HOTA.png" width="1000"/>
### Installation
```
pip install strongsort
```
### Tracker
```python
from strong_sort import StrongSORT

tracker = StrongSORT(model_weights='model.pt', device='cuda')

pred = model(img)  # detections from any object detector
for det in pred:
    # update the tracker with this frame's detections and the original image
    tracks = tracker.update(det, im0s)
```
### BibTeX Entry and Citation Info
```
@article{du2022strongsort,
title={Strongsort: Make deepsort great again},
author={Du, Yunhao and Song, Yang and Yang, Bo and Zhao, Yanyun},
journal={arXiv preprint arXiv:2202.13514},
year={2022}
}
```
|
emreisik/news
|
emreisik
| 2023-01-25T18:48:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-25T18:47:42Z |
This is the repository of a Turkish fake news dataset, which consists of Zaytung posts and Hurriyet news articles.
The Code folder contains the web scraper Python files.
The Raw folder contains txt files downloaded from the sources.
The Clean folder contains txt files converted to lowercase, with punctuation and numbers removed.
|
JoshuaRubin/bert-base-uncased-finetuned-math_punctuation-25-01-two_linear_layers-frozen_bert
|
JoshuaRubin
| 2023-01-25T18:11:17Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-01-25T13:08:35Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-math_punctuation-25-01-two_linear_layers-frozen_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-math_punctuation-25-01-two_linear_layers-frozen_bert
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2150
- Micro f1: 0.8910
- Macro f1: 0.2672
- Weighted f1: 0.8495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 | Weighted f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|
| 0.193 | 0.62 | 500 | 0.2146 | 0.8937 | 0.2360 | 0.8435 |
| 0.1936 | 1.23 | 1000 | 0.2130 | 0.8937 | 0.2360 | 0.8435 |
| 0.1924 | 1.85 | 1500 | 0.2119 | 0.8937 | 0.2361 | 0.8435 |
| 0.1911 | 2.47 | 2000 | 0.2128 | 0.8936 | 0.2369 | 0.8437 |
| 0.1909 | 3.09 | 2500 | 0.2114 | 0.8937 | 0.2369 | 0.8437 |
| 0.1904 | 3.7 | 3000 | 0.2137 | 0.8935 | 0.2407 | 0.8445 |
| 0.1935 | 4.32 | 3500 | 0.2138 | 0.8934 | 0.2469 | 0.8458 |
| 0.1874 | 4.94 | 4000 | 0.2118 | 0.8929 | 0.2561 | 0.8479 |
| 0.1908 | 5.56 | 4500 | 0.2134 | 0.8925 | 0.2588 | 0.8483 |
| 0.1877 | 6.17 | 5000 | 0.2135 | 0.8918 | 0.2628 | 0.8490 |
| 0.1881 | 6.79 | 5500 | 0.2133 | 0.8931 | 0.2554 | 0.8478 |
| 0.1902 | 7.41 | 6000 | 0.2137 | 0.8922 | 0.2603 | 0.8485 |
| 0.1883 | 8.02 | 6500 | 0.2155 | 0.8914 | 0.2655 | 0.8493 |
| 0.19 | 8.64 | 7000 | 0.2154 | 0.8914 | 0.2647 | 0.8490 |
| 0.1881 | 9.26 | 7500 | 0.2149 | 0.8915 | 0.2645 | 0.8492 |
| 0.1876 | 9.88 | 8000 | 0.2141 | 0.8911 | 0.2671 | 0.8496 |
| 0.1879 | 10.49 | 8500 | 0.2155 | 0.8897 | 0.2722 | 0.8501 |
| 0.1897 | 11.11 | 9000 | 0.2156 | 0.8910 | 0.2670 | 0.8494 |
| 0.1883 | 11.73 | 9500 | 0.2150 | 0.8910 | 0.2672 | 0.8495 |
### Framework versions
- Transformers 4.25.1
- Pytorch 2.0.0.dev20230111
- Datasets 2.8.0
- Tokenizers 0.13.2
|
twigs/bigbird-pegasus-large
|
twigs
| 2023-01-25T16:54:17Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-23T15:11:37Z |
---
language:
- en
---
BigBirdPegasus weights before finetuning as per [this](https://github.com/google-research/bigbird) repo.
Converted to PyTorch as per [this](https://github.com/huggingface/transformers/blob/v4.25.1/src/transformers/models/bigbird_pegasus/convert_bigbird_pegasus_tf_to_pytorch.py) script.
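A minimal loading sketch with 🤗 Transformers (these are un-finetuned weights, so generations are only meaningful after further training):
```python
from transformers import BigBirdPegasusForConditionalGeneration

# If this repo does not ship a tokenizer, pair the model with one from another
# BigBirdPegasus checkpoint such as google/bigbird-pegasus-large-arxiv.
model = BigBirdPegasusForConditionalGeneration.from_pretrained("twigs/bigbird-pegasus-large")
```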
|
kadirnar/ByteTracker
|
kadirnar
| 2023-01-25T16:50:36Z | 0 | 1 | null |
[
"object-detection",
"computer-vision",
"sort",
"tracker",
"bytetracker",
"arxiv:2110.06864",
"license:mit",
"region:us"
] |
object-detection
| 2023-01-25T16:40:01Z |
---
license: mit
tags:
- object-detection
- computer-vision
- sort
- tracker
- bytetracker
---
### Model Description
[ByteTrack](https://arxiv.org/abs/2110.06864): Multi-Object Tracking by Associating Every Detection Box
<img src="https://raw.githubusercontent.com/ifzhang/ByteTrack/main/assets/sota.png" width="500"/>
### Installation
```
pip install bytetracker
```
### Tracker
```python
from bytetracker import BYTETracker

tracker = BYTETracker(args)                 # args holds the tracker hyperparameters
for image in images:                        # iterate over video frames
    dets = detector(image)                  # per-frame detections from any object detector
    online_targets = tracker.update(dets)   # associate detections with existing tracks
```
### BibTeX Entry and Citation Info
```
@article{zhang2022bytetrack,
title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Weng, Fucheng and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
year={2022}
}
```
|
Constien/NewModel
|
Constien
| 2023-01-25T16:49:37Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T16:43:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: NewModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NewModel
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
segmentation-fault/stable-diffusion-something-sfw
|
segmentation-fault
| 2023-01-25T16:47:40Z | 0 | 0 | null |
[
"art",
"anime",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-25T16:24:39Z |
---
license: creativeml-openrail-m
tags:
- art
- anime
- stable-diffusion
---
|
johnt/bert_ft_sentence
|
johnt
| 2023-01-25T16:47:38Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-01-25T16:45:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# johnt/bert_ft_sentence
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('johnt/bert_ft_sentence')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=johnt/bert_ft_sentence)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2813 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 281,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
gokuls/mobilebert_sa_GLUE_Experiment_wnli_256
|
gokuls
| 2023-01-25T16:27:28Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T16:26:05Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_wnli_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_wnli_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6899
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6942 | 1.0 | 5 | 0.6899 | 0.5634 |
| 0.6935 | 2.0 | 10 | 0.6920 | 0.5634 |
| 0.6933 | 3.0 | 15 | 0.6930 | 0.5634 |
| 0.693 | 4.0 | 20 | 0.6921 | 0.5634 |
| 0.693 | 5.0 | 25 | 0.6912 | 0.5634 |
| 0.693 | 6.0 | 30 | 0.6909 | 0.5634 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_sa_GLUE_Experiment_sst2_256
|
gokuls
| 2023-01-25T16:18:55Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T15:30:50Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_sst2_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.801605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_sst2_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4333
- Accuracy: 0.8016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4969 | 1.0 | 527 | 0.4333 | 0.8016 |
| 0.2781 | 2.0 | 1054 | 0.4999 | 0.7833 |
| 0.2274 | 3.0 | 1581 | 0.4782 | 0.7924 |
| 0.2 | 4.0 | 2108 | 0.5582 | 0.7936 |
| 0.1835 | 5.0 | 2635 | 0.4967 | 0.7913 |
| 0.1708 | 6.0 | 3162 | 0.5061 | 0.7856 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
kostasang/a2c-PandaReachDense-v2
|
kostasang
| 2023-01-25T16:02:36Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T14:26:31Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.77 +/- 0.58
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
LarryAIDraw/corneo_marin_kitagawa
|
LarryAIDraw
| 2023-01-25T15:38:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-25T15:38:00Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5251/corneos-marin-kitagawa-ti-embedding
|
LarryAIDraw/corneo_covering_breasts_arms_crossed
|
LarryAIDraw
| 2023-01-25T15:36:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-25T15:36:17Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5241/corneos-covering-breasts-ti-embed-arms-crossed-version
|
threite/distilbert-base-uncased-finetuned-imdb
|
threite
| 2023-01-25T15:32:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-25T15:08:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7028 | 1.0 | 157 | 0.6567 |
| 0.679 | 2.0 | 314 | 0.6515 |
| 0.6692 | 3.0 | 471 | 0.6563 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1
- Datasets 2.7.1
- Tokenizers 0.13.1
|
LarryAIDraw/corneo_covering_breasts_one_arm
|
LarryAIDraw
| 2023-01-25T15:32:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-25T15:31:13Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5203/corneos-covering-breasts-ti-embed-one-arm-version
|
gokuls/mobilebert_sa_GLUE_Experiment_rte_256
|
gokuls
| 2023-01-25T15:30:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T15:26:28Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_rte_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_rte_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6927
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6937 | 1.0 | 20 | 0.6927 | 0.5271 |
| 0.6936 | 2.0 | 40 | 0.6929 | 0.5307 |
| 0.693 | 3.0 | 60 | 0.6930 | 0.5018 |
| 0.693 | 4.0 | 80 | 0.6934 | 0.4874 |
| 0.6927 | 5.0 | 100 | 0.6947 | 0.4585 |
| 0.6909 | 6.0 | 120 | 0.6942 | 0.5126 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
UKP-SQuARE/Extractive_MetaQA
|
UKP-SQuARE
| 2023-01-25T15:26:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"arxiv:2112.01922",
"endpoints_compatible",
"region:us"
] | null | 2023-01-25T15:19:38Z |
---
datasets:
- squad
- newsqa
- hotpot_qa
- biu-nlp/qamr
- search_qa
- natural_questions
- trivia_qa
- duorc
language:
- en
metrics:
- squad
---
# Model Card for UKP-SQuARE/Extractive_MetaQA
<!-- Provide a quick summary of what the model is/does. -->
Checkpoint of MetaQA trained only on extractive QA datasets, from the paper *MetaQA: Combining Expert Agents for Multi-Skill Question Answering* (https://arxiv.org/abs/2112.01922).
## Evaluation Results
```
{
"SQuAD": {
"exact_match": 86.73139158576052,
"f1": 92.65156746563402
},
"NewsQA": {
"exact_match": 55.84045584045584,
"f1": 71.73547617592037
},
"HotpotQA": {
"exact_match": 64.8135593220339,
"f1": 79.61023604916922
},
"SearchQA": {
"exact_match": 75.04122497055359,
"f1": 81.37280639135817
},
"NaturalQuestionsShort": {
"exact_match": 69.50763477718915,
"f1": 81.30374741690376
},
"TriviaQA-web": {
"exact_match": 77.18396711202466,
"f1": 81.52989853015538
},
"QAMR": {
"exact_match": 72.07531203723292,
"f1": 83.9068616637681
},
"DuoRC": {
"exact_match": 39.35626573106552,
"f1": 51.033295034422466
}
}
```
|
gokuls/mobilebert_sa_GLUE_Experiment_qqp_256
|
gokuls
| 2023-01-25T15:25:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-25T06:47:19Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mobilebert_sa_GLUE_Experiment_qqp_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.7976007914914668
- name: F1
type: f1
value: 0.7297109826589595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_qqp_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4349
- Accuracy: 0.7976
- F1: 0.7297
- Combined Score: 0.7637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.526 | 1.0 | 2843 | 0.5088 | 0.7492 | 0.6674 | 0.7083 |
| 0.4762 | 2.0 | 5686 | 0.4782 | 0.7695 | 0.6583 | 0.7139 |
| 0.4438 | 3.0 | 8529 | 0.4532 | 0.7847 | 0.6829 | 0.7338 |
| 0.4161 | 4.0 | 11372 | 0.4602 | 0.7869 | 0.7135 | 0.7502 |
| 0.3968 | 5.0 | 14215 | 0.4395 | 0.7955 | 0.7212 | 0.7583 |
| 0.3815 | 6.0 | 17058 | 0.4392 | 0.7985 | 0.7190 | 0.7587 |
| 0.3659 | 7.0 | 19901 | 0.4349 | 0.7976 | 0.7297 | 0.7637 |
| 0.352 | 8.0 | 22744 | 0.4419 | 0.8005 | 0.7300 | 0.7652 |
| 0.3399 | 9.0 | 25587 | 0.4454 | 0.7998 | 0.7317 | 0.7658 |
| 0.327 | 10.0 | 28430 | 0.4614 | 0.7995 | 0.7359 | 0.7677 |
| 0.3157 | 11.0 | 31273 | 0.4733 | 0.8000 | 0.7246 | 0.7623 |
| 0.3041 | 12.0 | 34116 | 0.4738 | 0.8041 | 0.7283 | 0.7662 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
bonadio/Reinforce-Cartpole-v1
|
bonadio
| 2023-01-25T15:12:11Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T15:11:56Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
morganjeffries/Reinforce-CartPole-v1
|
morganjeffries
| 2023-01-25T15:03:57Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-25T15:03:47Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aj555/ppo-SnowballTarget
|
aj555
| 2023-01-25T14:45:03Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-25T14:44:57Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: aj555/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|