| modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| srivatsavaasista/textgenerator | srivatsavaasista | 2022-08-04T05:40:30Z | 28 | 0 | transformers | ["transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2022-07-27T09:12:36Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: textgenerator
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# textgenerator
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.4579
- Validation Loss: 6.4893
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 398, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.5475 | 6.4893 | 0 |
| 6.4577 | 6.4893 | 1 |
| 6.4579 | 6.4893 | 2 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
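Since the card gives no usage example, here is a minimal sketch, assuming the TensorFlow checkpoint (the repo carries a "tf" tag) loads through the standard 🤗 Transformers text-generation pipeline; the prompt is illustrative:
```python
from transformers import pipeline

# Load the TensorFlow weights into a text-generation pipeline.
generator = pipeline("text-generation", model="srivatsavaasista/textgenerator", framework="tf")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```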
|
| keepitreal/mini-phobert-v2 | keepitreal | 2022-08-04T04:42:30Z | 4 | 0 | transformers | ["transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-08-03T20:07:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: mini-phobert-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-phobert-v2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
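No usage example is included; a minimal sketch, assuming this is a Vietnamese PhoBERT-style RoBERTa checkpoint that works with the standard fill-mask pipeline (the prompt is illustrative):
```python
from transformers import pipeline

# RoBERTa-style models use "<mask>" as the mask token.
unmasker = pipeline("fill-mask", model="keepitreal/mini-phobert-v2")
print(unmasker("Hà Nội là thủ đô của <mask> ."))
```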
|
| oMateos2020/pegasus-newsroom-cnn_full-adafactor-bs6 | oMateos2020 | 2022-08-04T03:55:37Z | 15 | 2 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-08-01T11:22:51Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: pegasus-newsroom-cnn_full-adafactor-bs6
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 44.1026
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-cnn_full-adafactor-bs6
This model is a fine-tuned version of [oMateos2020/pegasus-newsroom-cnn_full-adafactor-bs6](https://huggingface.co/oMateos2020/pegasus-newsroom-cnn_full-adafactor-bs6) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8671
- Rouge1: 44.1026
- Rouge2: 21.4261
- Rougel: 31.2033
- Rougelsum: 41.0324
- Gen Len: 72.0839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.9343 | 0.5 | 560 | 2.8733 | 44.1226 | 21.4087 | 31.2431 | 41.0683 | 69.367 |
| 2.9855 | 1.0 | 1120 | 2.8671 | 44.1026 | 21.4261 | 31.2033 | 41.0324 | 72.0839 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
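A minimal usage sketch, assuming the checkpoint works with the standard 🤗 Transformers summarization pipeline; the generation length is illustrative, chosen to roughly match the reported Gen Len:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="oMateos2020/pegasus-newsroom-cnn_full-adafactor-bs6")
article = "..."  # a CNN/DailyMail-style news article goes here
print(summarizer(article, max_length=72)[0]["summary_text"])
```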
|
| jjjjjjjjjj/q-FrozenLake-v1-4x4-noSlippery | jjjjjjjjjj | 2022-08-04T03:15:05Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-08-04T03:13:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the notebook used to train
# this agent (e.g. the Hugging Face Deep RL course Q-learning notebook); they are not part of a published package.
model = load_from_hub(repo_id="jjjjjjjjjj/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
| yashwantk/distilbert-base-cased-distilled-squad-finetuned-squad | yashwantk | 2022-08-04T02:42:07Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2_yash", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-08-02T10:29:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2_yash
model-index:
- name: distilbert-base-cased-distilled-squad-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-distilled-squad-finetuned-squad
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the squad_v2_yash dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 198 | 0.7576 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
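A minimal usage sketch, assuming the standard 🤗 Transformers question-answering pipeline; the question and context are illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="yashwantk/distilbert-base-cased-distilled-squad-finetuned-squad")
print(qa(question="Who wrote the report?", context="The report was written by the audit team in July 2022."))
```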
|
| Mateopablo/Futur | Mateopablo | 2022-08-04T02:27:52Z | 0 | 0 | null | ["region:us"] | null | 2022-08-04T02:26:46Z |
Mateo Martínez, Argentinian
license: afl-3.0
---
|
| jerryw/my_bert-base-cased | jerryw | 2022-08-04T01:38:04Z | 5 | 0 | transformers | ["transformers", "tf", "bert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-08-04T01:34:19Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my_bert-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my_bert-base-cased
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
| carted-nlp/categorization-finetuned-20220721-164940-pruned-20220803-184651 | carted-nlp | 2022-08-04T00:11:55Z | 4 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T18:49:03Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: categorization-finetuned-20220721-164940-pruned-20220803-184651
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# categorization-finetuned-20220721-164940-pruned-20220803-184651
This model is a fine-tuned version of [carted-nlp/categorization-finetuned-20220721-164940](https://huggingface.co/carted-nlp/categorization-finetuned-20220721-164940) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4673
- Accuracy: 0.8760
- F1: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 48
- eval_batch_size: 48
- seed: 314
- gradient_accumulation_steps: 6
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.3404 | 0.51 | 2000 | 0.4329 | 0.8872 | 0.8865 |
| 0.3433 | 1.01 | 4000 | 0.4280 | 0.8883 | 0.8876 |
| 0.3281 | 1.52 | 6000 | 0.4302 | 0.8890 | 0.8883 |
| 0.331 | 2.02 | 8000 | 0.4265 | 0.8891 | 0.8885 |
| 0.3224 | 2.53 | 10000 | 0.4300 | 0.8881 | 0.8874 |
| 0.3361 | 3.04 | 12000 | 0.4291 | 0.8889 | 0.8882 |
| 0.3323 | 3.54 | 14000 | 0.4337 | 0.8878 | 0.8871 |
| 0.3556 | 4.05 | 16000 | 0.4345 | 0.8857 | 0.8851 |
| 0.3663 | 4.56 | 18000 | 0.4417 | 0.8836 | 0.8828 |
| 0.3902 | 5.06 | 20000 | 0.4555 | 0.8789 | 0.8781 |
| 0.4036 | 5.57 | 22000 | 0.4556 | 0.8788 | 0.8779 |
| 0.4305 | 6.07 | 24000 | 0.4697 | 0.8751 | 0.8742 |
| 0.4501 | 6.58 | 26000 | 0.4763 | 0.8738 | 0.8725 |
| 0.4733 | 7.09 | 28000 | 0.4857 | 0.8710 | 0.8700 |
| 0.4851 | 7.59 | 30000 | 0.4863 | 0.8705 | 0.8695 |
| 0.4846 | 8.1 | 32000 | 0.4849 | 0.8708 | 0.8698 |
| 0.4856 | 8.61 | 34000 | 0.4835 | 0.8707 | 0.8695 |
| 0.4774 | 9.11 | 36000 | 0.4797 | 0.8719 | 0.8708 |
| 0.4635 | 9.62 | 38000 | 0.4776 | 0.8728 | 0.8717 |
| 0.4561 | 10.12 | 40000 | 0.4746 | 0.8739 | 0.8729 |
| 0.4475 | 10.63 | 42000 | 0.4705 | 0.8749 | 0.8740 |
| 0.4413 | 11.14 | 44000 | 0.4691 | 0.8754 | 0.8744 |
| 0.4389 | 11.64 | 46000 | 0.4679 | 0.8760 | 0.8750 |
| 0.4361 | 12.15 | 48000 | 0.4677 | 0.8759 | 0.8749 |
| 0.4362 | 12.65 | 50000 | 0.4672 | 0.8763 | 0.8753 |
| 0.4309 | 13.16 | 52000 | 0.4671 | 0.8761 | 0.8751 |
| 0.4316 | 13.67 | 54000 | 0.4670 | 0.8764 | 0.8754 |
| 0.4321 | 14.17 | 56000 | 0.4668 | 0.8764 | 0.8755 |
| 0.4311 | 14.68 | 58000 | 0.4668 | 0.8764 | 0.8754 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.11.6
|
| mrm8488/dqn-EnduroNoFrameskip-v4 | mrm8488 | 2022-08-03T23:23:24Z | 8 | 0 | stable-baselines3 | ["stable-baselines3", "EnduroNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-08-03T23:19:07Z |
---
library_name: stable-baselines3
tags:
- EnduroNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 553.80 +/- 125.50
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: EnduroNoFrameskip-v4
type: EnduroNoFrameskip-v4
---
# **DQN** Agent playing **EnduroNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **EnduroNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env EnduroNoFrameskip-v4 -orga mrm8488 -f logs/
python enjoy.py --algo dqn --env EnduroNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env EnduroNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env EnduroNoFrameskip-v4 -f logs/ -orga mrm8488
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 600000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
| huggingtweets/elonmusk-srinithyananda | huggingtweets | 2022-08-03T22:27:35Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-08-03T22:27:29Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1157286539036020737/5TQyrkEw_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & KAILASA's SPH Nithyananda</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-srinithyananda</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & KAILASA's SPH Nithyananda.
| Data | Elon Musk | KAILASA's SPH Nithyananda |
| --- | --- | --- |
| Tweets downloaded | 3200 | 3250 |
| Retweets | 128 | 6 |
| Short tweets | 982 | 523 |
| Tweets kept | 2090 | 2721 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2y3fe7dn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-srinithyananda's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/gywjziih) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/gywjziih/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-srinithyananda')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| khabiri/test_keras_model_elham | khabiri | 2022-08-03T22:23:45Z | 0 | 0 | keras | ["keras", "tf-keras", "region:us"] | null | 2022-08-03T22:23:36Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
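Since no loading instructions are given, a minimal sketch, assuming the repository holds a Keras model pushed with the standard `huggingface_hub` Keras mixin:
```python
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("khabiri/test_keras_model_elham")
model.summary()  # inspect the restored architecture
```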
|
| huggingtweets/elonmusk-srinithyananda-yeshuaissavior | huggingtweets | 2022-08-03T22:10:12Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-08-03T21:57:09Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1552061223864127488/Y-7S0UTB_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1157286539036020737/5TQyrkEw_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Feather of the One & Elon Musk & KAILASA's SPH Nithyananda</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-srinithyananda-yeshuaissavior</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Feather of the One & Elon Musk & KAILASA's SPH Nithyananda.
| Data | Feather of the One | Elon Musk | KAILASA's SPH Nithyananda |
| --- | --- | --- | --- |
| Tweets downloaded | 505 | 3200 | 3250 |
| Retweets | 29 | 128 | 6 |
| Short tweets | 175 | 982 | 523 |
| Tweets kept | 301 | 2090 | 2721 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wthdqz7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-srinithyananda-yeshuaissavior's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18cn8xoz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18cn8xoz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-srinithyananda-yeshuaissavior')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| RayS2022/q-Taxi-v3 | RayS2022 | 2022-08-03T20:58:23Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-08-03T20:58:15Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the notebook used to train
# this agent; they are not part of a published package.
model = load_from_hub(repo_id="RayS2022/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
| SharpAI/mal-tls-bert-base-relu-w1q8 | SharpAI | 2022-08-03T19:37:51Z | 4 | 0 | transformers | ["transformers", "pytorch", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T19:37:23Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls-bert-base-relu-w1q8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal_tls-bert-base-relu-w1q8
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.10.3
|
| yasnunsal/distilbert-base-uncased-finetuned-emotion | yasnunsal | 2022-08-03T18:32:09Z | 7 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T15:08:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
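A minimal usage sketch, assuming the standard 🤗 Transformers text-classification pipeline with label names from the emotion dataset; the example sentence is illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yasnunsal/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you this weekend!"))
```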
|
| BenWord/autotrain-APMv2Multiclass-1216046004 | BenWord | 2022-08-03T18:06:06Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:BenWord/autotrain-data-APMv2Multiclass", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T18:03:06Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- BenWord/autotrain-data-APMv2Multiclass
co2_eq_emissions:
emissions: 2.4364900803769225
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1216046004
- CO2 Emissions (in grams): 2.4365
## Validation Metrics
- Loss: 0.094
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/BenWord/autotrain-APMv2Multiclass-1216046004
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("BenWord/autotrain-APMv2Multiclass-1216046004", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("BenWord/autotrain-APMv2Multiclass-1216046004", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
| NitishKarra/layoutlmv3-finetuned-wildreceipt | NitishKarra | 2022-08-03T17:44:41Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:wildreceipt", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-03T16:06:42Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- wildreceipt
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-wildreceipt
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wildreceipt
type: wildreceipt
config: WildReceipt
split: train
args: WildReceipt
metrics:
- name: Precision
type: precision
value: 0.8693453601202679
- name: Recall
type: recall
value: 0.8753268198706481
- name: F1
type: f1
value: 0.872325836533187
- name: Accuracy
type: accuracy
value: 0.9240429965997587
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-wildreceipt
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the wildreceipt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3154
- Precision: 0.8693
- Recall: 0.8753
- F1: 0.8723
- Accuracy: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.32 | 100 | 1.3618 | 0.6375 | 0.3049 | 0.4125 | 0.6708 |
| No log | 0.63 | 200 | 0.9129 | 0.6662 | 0.4897 | 0.5645 | 0.7631 |
| No log | 0.95 | 300 | 0.6800 | 0.7273 | 0.6375 | 0.6795 | 0.8274 |
| No log | 1.26 | 400 | 0.5733 | 0.7579 | 0.6926 | 0.7238 | 0.8501 |
| 1.0638 | 1.58 | 500 | 0.5015 | 0.7854 | 0.7383 | 0.7611 | 0.8667 |
| 1.0638 | 1.89 | 600 | 0.4501 | 0.7916 | 0.7680 | 0.7796 | 0.8770 |
| 1.0638 | 2.21 | 700 | 0.4145 | 0.8177 | 0.8053 | 0.8114 | 0.8917 |
| 1.0638 | 2.52 | 800 | 0.3835 | 0.8214 | 0.8210 | 0.8212 | 0.8961 |
| 1.0638 | 2.84 | 900 | 0.3666 | 0.8251 | 0.8338 | 0.8294 | 0.9009 |
| 0.423 | 3.15 | 1000 | 0.3524 | 0.8485 | 0.8217 | 0.8349 | 0.9026 |
| 0.423 | 3.47 | 1100 | 0.3451 | 0.8430 | 0.8327 | 0.8378 | 0.9060 |
| 0.423 | 3.79 | 1200 | 0.3348 | 0.8347 | 0.8504 | 0.8425 | 0.9092 |
| 0.423 | 4.1 | 1300 | 0.3331 | 0.8368 | 0.8448 | 0.8408 | 0.9079 |
| 0.423 | 4.42 | 1400 | 0.3163 | 0.8503 | 0.8519 | 0.8511 | 0.9138 |
| 0.2822 | 4.73 | 1500 | 0.3168 | 0.8531 | 0.8504 | 0.8518 | 0.9133 |
| 0.2822 | 5.05 | 1600 | 0.3013 | 0.8629 | 0.8577 | 0.8603 | 0.9183 |
| 0.2822 | 5.36 | 1700 | 0.3146 | 0.8619 | 0.8528 | 0.8573 | 0.9160 |
| 0.2822 | 5.68 | 1800 | 0.3121 | 0.8474 | 0.8638 | 0.8555 | 0.9159 |
| 0.2822 | 5.99 | 1900 | 0.3054 | 0.8537 | 0.8667 | 0.8601 | 0.9166 |
| 0.2176 | 6.31 | 2000 | 0.3127 | 0.8556 | 0.8592 | 0.8574 | 0.9167 |
| 0.2176 | 6.62 | 2100 | 0.3072 | 0.8568 | 0.8667 | 0.8617 | 0.9194 |
| 0.2176 | 6.94 | 2200 | 0.2989 | 0.8617 | 0.8660 | 0.8638 | 0.9209 |
| 0.2176 | 7.26 | 2300 | 0.2997 | 0.8616 | 0.8682 | 0.8649 | 0.9199 |
| 0.2176 | 7.57 | 2400 | 0.3100 | 0.8538 | 0.8689 | 0.8613 | 0.9191 |
| 0.1777 | 7.89 | 2500 | 0.3022 | 0.8649 | 0.8695 | 0.8672 | 0.9228 |
| 0.1777 | 8.2 | 2600 | 0.2990 | 0.8631 | 0.8740 | 0.8685 | 0.9224 |
| 0.1777 | 8.52 | 2700 | 0.3072 | 0.8669 | 0.8697 | 0.8683 | 0.9228 |
| 0.1777 | 8.83 | 2800 | 0.3038 | 0.8689 | 0.8720 | 0.8705 | 0.9238 |
| 0.1777 | 9.15 | 2900 | 0.3138 | 0.8726 | 0.8673 | 0.8700 | 0.9216 |
| 0.1434 | 9.46 | 3000 | 0.3150 | 0.8610 | 0.8740 | 0.8674 | 0.9221 |
| 0.1434 | 9.78 | 3100 | 0.3055 | 0.8674 | 0.8742 | 0.8708 | 0.9239 |
| 0.1434 | 10.09 | 3200 | 0.3182 | 0.8618 | 0.8724 | 0.8671 | 0.9215 |
| 0.1434 | 10.41 | 3300 | 0.3175 | 0.8633 | 0.8727 | 0.8680 | 0.9223 |
| 0.1434 | 10.73 | 3400 | 0.3146 | 0.8685 | 0.8717 | 0.8701 | 0.9234 |
| 0.1282 | 11.04 | 3500 | 0.3136 | 0.8671 | 0.8757 | 0.8714 | 0.9233 |
| 0.1282 | 11.36 | 3600 | 0.3186 | 0.8679 | 0.8734 | 0.8706 | 0.9225 |
| 0.1282 | 11.67 | 3700 | 0.3147 | 0.8701 | 0.8745 | 0.8723 | 0.9238 |
| 0.1282 | 11.99 | 3800 | 0.3159 | 0.8705 | 0.8759 | 0.8732 | 0.9244 |
| 0.1282 | 12.3 | 3900 | 0.3147 | 0.8699 | 0.8748 | 0.8723 | 0.9246 |
| 0.1121 | 12.62 | 4000 | 0.3154 | 0.8693 | 0.8753 | 0.8723 | 0.9240 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
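A minimal loading sketch, assuming the standard 🤗 Transformers classes for LayoutLMv3; actual inference additionally needs a receipt image plus OCR words and bounding boxes prepared by the processor:
```python
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("NitishKarra/layoutlmv3-finetuned-wildreceipt")
model = AutoModelForTokenClassification.from_pretrained("NitishKarra/layoutlmv3-finetuned-wildreceipt")
# encode an image (and optionally words/boxes) with `processor`, then run `model(**encoding)`
```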
|
| MayaGalvez/bert-base-multilingual-cased-finetuned-nli | MayaGalvez | 2022-08-03T16:48:33Z | 18 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:xnli", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T11:58:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xnli
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-nli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: xnli
type: xnli
config: en
split: train
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.8156626506024096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-nli
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the xnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4681
- Accuracy: 0.8157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9299 | 0.02 | 200 | 0.8468 | 0.6277 |
| 0.7967 | 0.03 | 400 | 0.7425 | 0.6855 |
| 0.7497 | 0.05 | 600 | 0.7116 | 0.6924 |
| 0.7083 | 0.07 | 800 | 0.6868 | 0.7153 |
| 0.6882 | 0.08 | 1000 | 0.6638 | 0.7289 |
| 0.6944 | 0.1 | 1200 | 0.6476 | 0.7361 |
| 0.6682 | 0.11 | 1400 | 0.6364 | 0.7458 |
| 0.6635 | 0.13 | 1600 | 0.6592 | 0.7337 |
| 0.6423 | 0.15 | 1800 | 0.6120 | 0.7510 |
| 0.6196 | 0.16 | 2000 | 0.5990 | 0.7582 |
| 0.6381 | 0.18 | 2200 | 0.6026 | 0.7538 |
| 0.6276 | 0.2 | 2400 | 0.6054 | 0.7598 |
| 0.6248 | 0.21 | 2600 | 0.6368 | 0.7526 |
| 0.6331 | 0.23 | 2800 | 0.5959 | 0.7655 |
| 0.6142 | 0.24 | 3000 | 0.6117 | 0.7554 |
| 0.6124 | 0.26 | 3200 | 0.6221 | 0.7570 |
| 0.6127 | 0.28 | 3400 | 0.5748 | 0.7695 |
| 0.602 | 0.29 | 3600 | 0.5735 | 0.7598 |
| 0.5923 | 0.31 | 3800 | 0.5609 | 0.7723 |
| 0.5827 | 0.33 | 4000 | 0.5635 | 0.7743 |
| 0.5732 | 0.34 | 4200 | 0.5547 | 0.7771 |
| 0.5757 | 0.36 | 4400 | 0.5629 | 0.7739 |
| 0.5736 | 0.37 | 4600 | 0.5680 | 0.7659 |
| 0.5642 | 0.39 | 4800 | 0.5437 | 0.7871 |
| 0.5763 | 0.41 | 5000 | 0.5589 | 0.7807 |
| 0.5713 | 0.42 | 5200 | 0.5355 | 0.7867 |
| 0.5644 | 0.44 | 5400 | 0.5346 | 0.7888 |
| 0.5727 | 0.46 | 5600 | 0.5519 | 0.7815 |
| 0.5539 | 0.47 | 5800 | 0.5219 | 0.7900 |
| 0.5516 | 0.49 | 6000 | 0.5560 | 0.7795 |
| 0.5539 | 0.51 | 6200 | 0.5544 | 0.7847 |
| 0.5693 | 0.52 | 6400 | 0.5322 | 0.7932 |
| 0.5632 | 0.54 | 6600 | 0.5404 | 0.7936 |
| 0.565 | 0.55 | 6800 | 0.5382 | 0.7880 |
| 0.5555 | 0.57 | 7000 | 0.5364 | 0.7920 |
| 0.5329 | 0.59 | 7200 | 0.5177 | 0.7964 |
| 0.54 | 0.6 | 7400 | 0.5286 | 0.7916 |
| 0.554 | 0.62 | 7600 | 0.5401 | 0.7835 |
| 0.5447 | 0.64 | 7800 | 0.5261 | 0.7876 |
| 0.5438 | 0.65 | 8000 | 0.5032 | 0.8020 |
| 0.5505 | 0.67 | 8200 | 0.5220 | 0.7924 |
| 0.5364 | 0.68 | 8400 | 0.5398 | 0.7876 |
| 0.5317 | 0.7 | 8600 | 0.5310 | 0.7944 |
| 0.5361 | 0.72 | 8800 | 0.5297 | 0.7936 |
| 0.5204 | 0.73 | 9000 | 0.5270 | 0.7940 |
| 0.5189 | 0.75 | 9200 | 0.5193 | 0.7964 |
| 0.5348 | 0.77 | 9400 | 0.5270 | 0.7867 |
| 0.5363 | 0.78 | 9600 | 0.5194 | 0.7924 |
| 0.5184 | 0.8 | 9800 | 0.5298 | 0.7888 |
| 0.5072 | 0.81 | 10000 | 0.4999 | 0.7992 |
| 0.5229 | 0.83 | 10200 | 0.4922 | 0.8108 |
| 0.5201 | 0.85 | 10400 | 0.5019 | 0.7920 |
| 0.5304 | 0.86 | 10600 | 0.4959 | 0.7992 |
| 0.5061 | 0.88 | 10800 | 0.5047 | 0.7980 |
| 0.5291 | 0.9 | 11000 | 0.4974 | 0.8068 |
| 0.5099 | 0.91 | 11200 | 0.4988 | 0.8036 |
| 0.5271 | 0.93 | 11400 | 0.4899 | 0.8028 |
| 0.5211 | 0.95 | 11600 | 0.4866 | 0.8092 |
| 0.4977 | 0.96 | 11800 | 0.5059 | 0.7960 |
| 0.5155 | 0.98 | 12000 | 0.4821 | 0.8084 |
| 0.5061 | 0.99 | 12200 | 0.4763 | 0.8116 |
| 0.4607 | 1.01 | 12400 | 0.5245 | 0.8020 |
| 0.4435 | 1.03 | 12600 | 0.5021 | 0.8032 |
| 0.4289 | 1.04 | 12800 | 0.5219 | 0.8060 |
| 0.4227 | 1.06 | 13000 | 0.5119 | 0.8076 |
| 0.4349 | 1.08 | 13200 | 0.4957 | 0.8104 |
| 0.4331 | 1.09 | 13400 | 0.4914 | 0.8129 |
| 0.4269 | 1.11 | 13600 | 0.4785 | 0.8145 |
| 0.4185 | 1.12 | 13800 | 0.4879 | 0.8161 |
| 0.4244 | 1.14 | 14000 | 0.4834 | 0.8149 |
| 0.4016 | 1.16 | 14200 | 0.5084 | 0.8056 |
| 0.4106 | 1.17 | 14400 | 0.4993 | 0.8052 |
| 0.4345 | 1.19 | 14600 | 0.5029 | 0.8124 |
| 0.4162 | 1.21 | 14800 | 0.4841 | 0.8120 |
| 0.4239 | 1.22 | 15000 | 0.4756 | 0.8189 |
| 0.4215 | 1.24 | 15200 | 0.4957 | 0.8088 |
| 0.4157 | 1.25 | 15400 | 0.4845 | 0.8112 |
| 0.3982 | 1.27 | 15600 | 0.5064 | 0.8048 |
| 0.4056 | 1.29 | 15800 | 0.4827 | 0.8241 |
| 0.4105 | 1.3 | 16000 | 0.4936 | 0.8088 |
| 0.4221 | 1.32 | 16200 | 0.4800 | 0.8129 |
| 0.4029 | 1.34 | 16400 | 0.4790 | 0.8181 |
| 0.4346 | 1.35 | 16600 | 0.4802 | 0.8137 |
| 0.4163 | 1.37 | 16800 | 0.4838 | 0.8213 |
| 0.4106 | 1.39 | 17000 | 0.4905 | 0.8209 |
| 0.4071 | 1.4 | 17200 | 0.4889 | 0.8153 |
| 0.4077 | 1.42 | 17400 | 0.4801 | 0.8165 |
| 0.4074 | 1.43 | 17600 | 0.4765 | 0.8217 |
| 0.4095 | 1.45 | 17800 | 0.4942 | 0.8096 |
| 0.4117 | 1.47 | 18000 | 0.4668 | 0.8225 |
| 0.3991 | 1.48 | 18200 | 0.4814 | 0.8161 |
| 0.4114 | 1.5 | 18400 | 0.4757 | 0.8193 |
| 0.4061 | 1.52 | 18600 | 0.4702 | 0.8209 |
| 0.4104 | 1.53 | 18800 | 0.4814 | 0.8149 |
| 0.3997 | 1.55 | 19000 | 0.4833 | 0.8141 |
| 0.3992 | 1.56 | 19200 | 0.4847 | 0.8169 |
| 0.4021 | 1.58 | 19400 | 0.4893 | 0.8189 |
| 0.4284 | 1.6 | 19600 | 0.4806 | 0.8173 |
| 0.3915 | 1.61 | 19800 | 0.4952 | 0.8092 |
| 0.4122 | 1.63 | 20000 | 0.4917 | 0.8112 |
| 0.4164 | 1.65 | 20200 | 0.4769 | 0.8157 |
| 0.4063 | 1.66 | 20400 | 0.4723 | 0.8141 |
| 0.4087 | 1.68 | 20600 | 0.4701 | 0.8157 |
| 0.4159 | 1.69 | 20800 | 0.4826 | 0.8141 |
| 0.4 | 1.71 | 21000 | 0.4760 | 0.8133 |
| 0.4024 | 1.73 | 21200 | 0.4755 | 0.8161 |
| 0.4201 | 1.74 | 21400 | 0.4728 | 0.8173 |
| 0.4066 | 1.76 | 21600 | 0.4690 | 0.8157 |
| 0.3941 | 1.78 | 21800 | 0.4687 | 0.8181 |
| 0.3987 | 1.79 | 22000 | 0.4735 | 0.8149 |
| 0.4074 | 1.81 | 22200 | 0.4715 | 0.8137 |
| 0.4083 | 1.83 | 22400 | 0.4660 | 0.8181 |
| 0.4107 | 1.84 | 22600 | 0.4699 | 0.8161 |
| 0.3924 | 1.86 | 22800 | 0.4732 | 0.8153 |
| 0.4205 | 1.87 | 23000 | 0.4686 | 0.8177 |
| 0.3962 | 1.89 | 23200 | 0.4688 | 0.8177 |
| 0.3888 | 1.91 | 23400 | 0.4778 | 0.8124 |
| 0.3978 | 1.92 | 23600 | 0.4713 | 0.8145 |
| 0.3963 | 1.94 | 23800 | 0.4704 | 0.8145 |
| 0.408 | 1.96 | 24000 | 0.4674 | 0.8165 |
| 0.4014 | 1.97 | 24200 | 0.4679 | 0.8161 |
| 0.3951 | 1.99 | 24400 | 0.4681 | 0.8157 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
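A minimal usage sketch, assuming the standard 🤗 Transformers text-classification pipeline; XNLI-style models score a premise/hypothesis pair, and the sentences below are illustrative:
```python
from transformers import pipeline

nli = pipeline("text-classification", model="MayaGalvez/bert-base-multilingual-cased-finetuned-nli")
print(nli({"text": "A man is playing a guitar on stage.", "text_pair": "Someone is performing music."}))
```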
|
| bhaskar75/ddpm-butterflies-128 | bhaskar75 | 2022-08-03T15:55:42Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us"] | null | 2022-08-03T15:08:41Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal usage sketch, assuming a recent 🤗 Diffusers release and the DDPMPipeline class tagged above
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("bhaskar75/ddpm-butterflies-128")
pipeline().images[0].save("butterfly.png")  # generate and save one sample
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/bhaskar75/ddpm-butterflies-128/tensorboard?#scalars)
|
| dminiotas05/distilbert-base-uncased-finetuned-ft1500_norm500 | dminiotas05 | 2022-08-03T14:50:40Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T13:53:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ft1500_norm500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft1500_norm500
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8852
- Mse: 2.9505
- Mae: 1.0272
- R2: 0.4233
- Accuracy: 0.4914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.62 | 1.0 | 3122 | 0.8853 | 2.9511 | 1.0392 | 0.4232 | 0.4830 |
| 0.5042 | 2.0 | 6244 | 0.8695 | 2.8984 | 1.0347 | 0.4335 | 0.4651 |
| 0.309 | 3.0 | 9366 | 0.8852 | 2.9505 | 1.0272 | 0.4233 | 0.4914 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
| DOOGLAK/wikigold_trained_no_DA | DOOGLAK | 2022-08-03T14:33:52Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:wikigold_splits", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-03T14:25:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikigold_splits
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: temp
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikigold_splits
type: wikigold_splits
args: default
metrics:
- name: Precision
type: precision
value: 0.8517110266159695
- name: Recall
type: recall
value: 0.875
- name: F1
type: f1
value: 0.8631984585741811
- name: Accuracy
type: accuracy
value: 0.9607367910809501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# temp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the wikigold_splits dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1322
- Precision: 0.8517
- Recall: 0.875
- F1: 0.8632
- Accuracy: 0.9607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 167 | 0.1490 | 0.7583 | 0.7760 | 0.7671 | 0.9472 |
| No log | 2.0 | 334 | 0.1337 | 0.8519 | 0.8464 | 0.8491 | 0.9572 |
| 0.1569 | 3.0 | 501 | 0.1322 | 0.8517 | 0.875 | 0.8632 | 0.9607 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
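A minimal usage sketch, assuming the standard 🤗 Transformers token-classification (NER) pipeline and CoNLL-style entity labels from wikigold; the sentence is illustrative:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="DOOGLAK/wikigold_trained_no_DA", aggregation_strategy="simple")
print(ner("Barack Obama visited Berlin in 2013."))
```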
|
| elopezlopez/distilbert-base-uncased_fold_10_binary_v1 | elopezlopez | 2022-08-03T14:29:32Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T11:51:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_10_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_10_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6912
- F1: 0.7977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4002 | 0.8012 |
| 0.4056 | 2.0 | 576 | 0.4372 | 0.8075 |
| 0.4056 | 3.0 | 864 | 0.4720 | 0.8071 |
| 0.1958 | 4.0 | 1152 | 0.8156 | 0.7980 |
| 0.1958 | 5.0 | 1440 | 0.8633 | 0.8055 |
| 0.0847 | 6.0 | 1728 | 0.9761 | 0.8041 |
| 0.0356 | 7.0 | 2016 | 1.1816 | 0.7861 |
| 0.0356 | 8.0 | 2304 | 1.2251 | 0.7918 |
| 0.0215 | 9.0 | 2592 | 1.3423 | 0.7798 |
| 0.0215 | 10.0 | 2880 | 1.3888 | 0.7913 |
| 0.013 | 11.0 | 3168 | 1.2899 | 0.8040 |
| 0.013 | 12.0 | 3456 | 1.4247 | 0.8051 |
| 0.0049 | 13.0 | 3744 | 1.5436 | 0.7991 |
| 0.0061 | 14.0 | 4032 | 1.5762 | 0.7991 |
| 0.0061 | 15.0 | 4320 | 1.5461 | 0.7998 |
| 0.0054 | 16.0 | 4608 | 1.5622 | 0.8018 |
| 0.0054 | 17.0 | 4896 | 1.6658 | 0.7991 |
| 0.0021 | 18.0 | 5184 | 1.6765 | 0.7972 |
| 0.0021 | 19.0 | 5472 | 1.6864 | 0.7973 |
| 0.0052 | 20.0 | 5760 | 1.6303 | 0.8030 |
| 0.0029 | 21.0 | 6048 | 1.6631 | 0.7947 |
| 0.0029 | 22.0 | 6336 | 1.6571 | 0.8006 |
| 0.0027 | 23.0 | 6624 | 1.6729 | 0.7949 |
| 0.0027 | 24.0 | 6912 | 1.6931 | 0.7934 |
| 0.0001 | 25.0 | 7200 | 1.6912 | 0.7977 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
| elopezlopez/distilbert-base-uncased_fold_9_binary_v1 | elopezlopez | 2022-08-03T14:14:40Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T11:37:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_9_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_9_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6965
- F1: 0.8090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 291 | 0.4193 | 0.7989 |
| 0.3993 | 2.0 | 582 | 0.4039 | 0.8026 |
| 0.3993 | 3.0 | 873 | 0.5227 | 0.7995 |
| 0.2044 | 4.0 | 1164 | 0.7264 | 0.8011 |
| 0.2044 | 5.0 | 1455 | 0.8497 | 0.8007 |
| 0.0882 | 6.0 | 1746 | 0.9543 | 0.8055 |
| 0.0374 | 7.0 | 2037 | 1.1349 | 0.7997 |
| 0.0374 | 8.0 | 2328 | 1.3175 | 0.8009 |
| 0.0151 | 9.0 | 2619 | 1.3585 | 0.8030 |
| 0.0151 | 10.0 | 2910 | 1.4202 | 0.8067 |
| 0.0068 | 11.0 | 3201 | 1.4364 | 0.8108 |
| 0.0068 | 12.0 | 3492 | 1.4443 | 0.8088 |
| 0.0096 | 13.0 | 3783 | 1.5308 | 0.8075 |
| 0.0031 | 14.0 | 4074 | 1.5061 | 0.8020 |
| 0.0031 | 15.0 | 4365 | 1.5769 | 0.7980 |
| 0.0048 | 16.0 | 4656 | 1.5962 | 0.8038 |
| 0.0048 | 17.0 | 4947 | 1.5383 | 0.8085 |
| 0.0067 | 18.0 | 5238 | 1.5456 | 0.8158 |
| 0.0062 | 19.0 | 5529 | 1.6325 | 0.8044 |
| 0.0062 | 20.0 | 5820 | 1.5430 | 0.8141 |
| 0.0029 | 21.0 | 6111 | 1.6590 | 0.8117 |
| 0.0029 | 22.0 | 6402 | 1.6650 | 0.8112 |
| 0.0017 | 23.0 | 6693 | 1.7016 | 0.8053 |
| 0.0017 | 24.0 | 6984 | 1.6998 | 0.8090 |
| 0.0011 | 25.0 | 7275 | 1.6965 | 0.8090 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
| elopezlopez/distilbert-base-uncased_fold_8_binary_v1 | elopezlopez | 2022-08-03T13:59:34Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T11:22:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_8_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_8_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6283
- F1: 0.8178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4038 | 0.7981 |
| 0.409 | 2.0 | 580 | 0.4023 | 0.8176 |
| 0.409 | 3.0 | 870 | 0.5245 | 0.8169 |
| 0.1938 | 4.0 | 1160 | 0.6242 | 0.8298 |
| 0.1938 | 5.0 | 1450 | 0.8432 | 0.8159 |
| 0.0848 | 6.0 | 1740 | 1.0887 | 0.8015 |
| 0.038 | 7.0 | 2030 | 1.0700 | 0.8167 |
| 0.038 | 8.0 | 2320 | 1.0970 | 0.8241 |
| 0.0159 | 9.0 | 2610 | 1.2474 | 0.8142 |
| 0.0159 | 10.0 | 2900 | 1.3453 | 0.8184 |
| 0.01 | 11.0 | 3190 | 1.4412 | 0.8147 |
| 0.01 | 12.0 | 3480 | 1.4263 | 0.8181 |
| 0.007 | 13.0 | 3770 | 1.3859 | 0.8258 |
| 0.0092 | 14.0 | 4060 | 1.4633 | 0.8128 |
| 0.0092 | 15.0 | 4350 | 1.4304 | 0.8206 |
| 0.0096 | 16.0 | 4640 | 1.5081 | 0.8149 |
| 0.0096 | 17.0 | 4930 | 1.5239 | 0.8189 |
| 0.0047 | 18.0 | 5220 | 1.5268 | 0.8151 |
| 0.0053 | 19.0 | 5510 | 1.5445 | 0.8173 |
| 0.0053 | 20.0 | 5800 | 1.6051 | 0.8180 |
| 0.0014 | 21.0 | 6090 | 1.5981 | 0.8211 |
| 0.0014 | 22.0 | 6380 | 1.5957 | 0.8225 |
| 0.001 | 23.0 | 6670 | 1.5838 | 0.8189 |
| 0.001 | 24.0 | 6960 | 1.6301 | 0.8178 |
| 0.0018 | 25.0 | 7250 | 1.6283 | 0.8178 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
| dminiotas05/distilbert-base-uncased-finetuned-ft1500_unnorm | dminiotas05 | 2022-08-03T12:56:08Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-03T12:24:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ft1500_unnorm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft1500_unnorm
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0557
- Mse: 205571.2188
- Mae: 74.8054
- R2: 0.0463
- Accuracy: 0.0090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
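As a rough sketch, the listing above corresponds to a `TrainingArguments` configuration like the following (anything not listed, such as `output_dir`, is an assumption):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; unlisted settings are assumptions.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ft1500_unnorm",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```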
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|:------:|:--------:|
| 1.2054 | 1.0 | 3122 | 2.0557 | 205571.2188 | 74.8054 | 0.0463 | 0.0090 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
bhavesh/arinfo_sample_dataset_finaltffwjv58-model-classification
|
bhavesh
| 2022-08-03T12:40:45Z | 0 | 0 |
sklearn
|
[
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2022-08-03T12:40:39Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on arinfo_sample_dataset_finaltffwjv58 to apply classification on model
**Metrics of the best model** (`DecisionTreeClassifier(class_weight='balanced', max_depth=2249)`):
| Metric | Value |
|:----------------|---------:|
| accuracy | 0.930688 |
| recall_macro | 0.655991 |
| precision_macro | 0.640972 |
| f1_macro | 0.638021 |
**Best model pipeline:** an `EasyPreprocessor` step that types the 13 input columns (rto, ownerNum, cc, insurance, weight, financer, fuelType, class, state, year, categoryId, onroadPrice, price_FAIR; weight, onroadPrice and price_FAIR are treated as continuous, rto and financer as free-text), followed by `DecisionTreeClassifier(class_weight='balanced', max_depth=2249)`.
**Disclaimer:** This model is trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training**, including the models tried in the process, can be found in logs.txt.
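A hedged sketch of loading the baseline pipeline from the Hub follows; the pickle filename is an assumption, so check the repository's file list before relying on it:
```python
import joblib
from huggingface_hub import hf_hub_download

# Sketch only: "model.pkl" is an assumed filename, not confirmed by the card.
path = hf_hub_download(
    repo_id="bhavesh/arinfo_sample_dataset_finaltffwjv58-model-classification",
    filename="model.pkl",
)
clf_pipeline = joblib.load(path)
# clf_pipeline.predict(df) expects a pandas DataFrame with the original training columns.
```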
|
Rocketknight1/distilbert-base-uncased-finetuned-cola
|
Rocketknight1
| 2022-08-03T12:13:22Z | 7 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3182
- Validation Loss: 0.4914
- Train Matthews Correlation: 0.5056
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
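For reference, the optimizer configuration above can be reconstructed in Keras as follows (a sketch built only from the listed values):
```python
import tensorflow as tf

# Polynomial decay schedule and Adam optimizer matching the listed hyperparameters.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05, decay_steps=1602, end_learning_rate=0.0, power=1.0
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```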
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5126 | 0.4638 | 0.4555 | 0 |
| 0.3182 | 0.4914 | 0.5056 | 1 |
### Framework versions
- Transformers 4.22.0.dev0
- TensorFlow 2.9.1
- Datasets 2.4.1.dev0
- Tokenizers 0.11.0
|
masapasa/is_cat
|
masapasa
| 2022-08-03T10:57:18Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-08-03T10:53:01Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
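In the meantime, a minimal hedged sketch for loading the learner from the Hub (assuming the repository contains an exported fastai learner; the image path is a placeholder):
```python
from huggingface_hub import from_pretrained_fastai

# Load the exported fastai learner directly from the Hub and run a single prediction.
learner = from_pretrained_fastai("masapasa/is_cat")
print(learner.predict("cat.jpg"))  # "cat.jpg" is a placeholder input image
```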
## Training and evaluation data
More information needed
|
SlavaC/bert-fine-tuned-cola
|
SlavaC
| 2022-08-03T10:47:51Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-03T10:12:13Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2861
- Validation Loss: 0.4212
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4878 | 0.4234 | 0 |
| 0.2861 | 0.4212 | 1 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.7.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
spacestar1705/Reinforce-PixelCopter-PLE-v0
|
spacestar1705
| 2022-08-03T09:30:13Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-02T12:45:24Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-PLE-v0
results:
- metrics:
- type: mean_reward
value: 10.60 +/- 9.50
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
SyedArsal/roberta-urdu-small-finetuned-news
|
SyedArsal
| 2022-08-03T09:13:02Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"multiple-choice",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-07-29T08:04:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-urdu-small-finetuned-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-urdu-small-finetuned-news
This model is a fine-tuned version of [urduhack/roberta-urdu-small](https://huggingface.co/urduhack/roberta-urdu-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2702
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5949 | 1.0 | 938 | 0.3626 | 0.9029 |
| 0.1351 | 2.0 | 1876 | 0.2545 | 0.9389 |
| 0.0281 | 3.0 | 2814 | 0.2702 | 0.9482 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
kws/dqn-SpaceInvadersNoFrameskip-v4
|
kws
| 2022-08-03T07:43:27Z | 8 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-03T07:42:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 603.00 +/- 194.90
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kws -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kws
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
BekirTaha/dqn-SpaceInvadersNoFrameskip-v4
|
BekirTaha
| 2022-08-03T07:41:26Z | 8 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-02T13:34:41Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 577.50 +/- 116.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BekirTaha -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BekirTaha
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
NimaBoscarino/July25Test
|
NimaBoscarino
| 2022-08-03T07:20:01Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-07-26T02:54:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# NimaBoscarino/July25Test
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('NimaBoscarino/July25Test')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('NimaBoscarino/July25Test')
model = AutoModel.from_pretrained('NimaBoscarino/July25Test')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=NimaBoscarino/July25Test)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
msms/deberta-v3-base-squad2-finetuned-squad
|
msms
| 2022-08-03T06:25:28Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"deberta-v2",
"question-answering",
"generated_from_keras_callback",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-02T11:28:16Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: msms/deberta-v3-base-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# msms/deberta-v3-base-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/deberta-v3-base-squad2](https://huggingface.co/deepset/deberta-v3-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7266
- Validation Loss: 4.5755
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
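Until usage notes are added, a minimal question-answering sketch is shown below; the question/context pair is illustrative only, and `framework="tf"` is used because the repository appears to host TensorFlow weights:
```python
from transformers import pipeline

# Extractive QA with the fine-tuned DeBERTa-v3 checkpoint.
qa = pipeline(
    "question-answering",
    model="msms/deberta-v3-base-squad2-finetuned-squad",
    framework="tf",
)
print(qa(question="What was the model fine-tuned from?",
         context="The checkpoint was fine-tuned from deepset/deberta-v3-base-squad2."))
```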
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1533, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3334 | 3.8035 | 0 |
| 0.7266 | 4.5755 | 1 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
wooihen/xlm-roberta-base-finetuned-panx-de
|
wooihen
| 2022-08-03T02:12:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-12T07:47:47Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
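A minimal NER sketch follows; the aggregation strategy and the example sentence are illustrative choices, not documented usage:
```python
from transformers import pipeline

# Token classification on German text (PAN-X.de) with grouped entity spans.
ner = pipeline(
    "token-classification",
    model="wooihen/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```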
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
elopezlopez/distilbert-base-uncased_fold_5_binary_v1
|
elopezlopez
| 2022-08-02T23:02:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T22:48:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_5_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_5_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6980
- F1: 0.8110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4412 | 0.7981 |
| 0.396 | 2.0 | 576 | 0.4419 | 0.8078 |
| 0.396 | 3.0 | 864 | 0.4955 | 0.8166 |
| 0.2019 | 4.0 | 1152 | 0.6341 | 0.8075 |
| 0.2019 | 5.0 | 1440 | 1.0351 | 0.7979 |
| 0.0808 | 6.0 | 1728 | 1.1818 | 0.7844 |
| 0.0315 | 7.0 | 2016 | 1.2530 | 0.8051 |
| 0.0315 | 8.0 | 2304 | 1.3568 | 0.7937 |
| 0.0143 | 9.0 | 2592 | 1.4009 | 0.8045 |
| 0.0143 | 10.0 | 2880 | 1.5333 | 0.7941 |
| 0.0066 | 11.0 | 3168 | 1.5242 | 0.7982 |
| 0.0066 | 12.0 | 3456 | 1.5752 | 0.8050 |
| 0.0091 | 13.0 | 3744 | 1.5199 | 0.8046 |
| 0.0111 | 14.0 | 4032 | 1.5319 | 0.8117 |
| 0.0111 | 15.0 | 4320 | 1.5333 | 0.8156 |
| 0.0072 | 16.0 | 4608 | 1.5461 | 0.8192 |
| 0.0072 | 17.0 | 4896 | 1.5288 | 0.8252 |
| 0.0048 | 18.0 | 5184 | 1.5725 | 0.8078 |
| 0.0048 | 19.0 | 5472 | 1.5896 | 0.8138 |
| 0.0032 | 20.0 | 5760 | 1.6917 | 0.8071 |
| 0.0028 | 21.0 | 6048 | 1.6608 | 0.8109 |
| 0.0028 | 22.0 | 6336 | 1.7013 | 0.8122 |
| 0.0029 | 23.0 | 6624 | 1.6769 | 0.8148 |
| 0.0029 | 24.0 | 6912 | 1.6906 | 0.8100 |
| 0.0006 | 25.0 | 7200 | 1.6980 | 0.8110 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_2_binary_v1
|
elopezlopez
| 2022-08-02T22:17:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T22:03:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_2_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_2_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8833
- F1: 0.7841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
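The F1 values reported for this model can be computed with the `evaluate` library; a minimal sketch with placeholder predictions and references:
```python
import evaluate

# Binary F1, as reported in the tables below; the lists here are placeholders.
f1_metric = evaluate.load("f1")
print(f1_metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0]))
```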
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4060 | 0.8070 |
| 0.3981 | 2.0 | 580 | 0.4534 | 0.8072 |
| 0.3981 | 3.0 | 870 | 0.5460 | 0.7961 |
| 0.1985 | 4.0 | 1160 | 0.8684 | 0.7818 |
| 0.1985 | 5.0 | 1450 | 0.9009 | 0.7873 |
| 0.0844 | 6.0 | 1740 | 1.1529 | 0.7825 |
| 0.0329 | 7.0 | 2030 | 1.3185 | 0.7850 |
| 0.0329 | 8.0 | 2320 | 1.4110 | 0.7862 |
| 0.0109 | 9.0 | 2610 | 1.4751 | 0.7784 |
| 0.0109 | 10.0 | 2900 | 1.6276 | 0.7723 |
| 0.0071 | 11.0 | 3190 | 1.6779 | 0.7861 |
| 0.0071 | 12.0 | 3480 | 1.6258 | 0.7850 |
| 0.0041 | 13.0 | 3770 | 1.6324 | 0.7903 |
| 0.0109 | 14.0 | 4060 | 1.7563 | 0.7932 |
| 0.0109 | 15.0 | 4350 | 1.6740 | 0.7906 |
| 0.0079 | 16.0 | 4640 | 1.7468 | 0.7944 |
| 0.0079 | 17.0 | 4930 | 1.7095 | 0.7879 |
| 0.0067 | 18.0 | 5220 | 1.7293 | 0.7912 |
| 0.0021 | 19.0 | 5510 | 1.7875 | 0.7848 |
| 0.0021 | 20.0 | 5800 | 1.7462 | 0.7906 |
| 0.0026 | 21.0 | 6090 | 1.8549 | 0.7815 |
| 0.0026 | 22.0 | 6380 | 1.8314 | 0.7860 |
| 0.0021 | 23.0 | 6670 | 1.8577 | 0.7839 |
| 0.0021 | 24.0 | 6960 | 1.8548 | 0.7883 |
| 0.0001 | 25.0 | 7250 | 1.8833 | 0.7841 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sumba/covid-twitter-bert-v2-no_description-stance-loss-hyp-unprocess
|
sumba
| 2022-08-02T21:49:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T17:16:02Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: covid-twitter-bert-v2-no_description-stance-loss-hyp-unprocess
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-twitter-bert-v2-no_description-stance-loss-hyp-unprocess
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5162
- Accuracy: 0.0862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4275469935864394e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8058 | 1.0 | 632 | 0.5946 | 0.1411 |
| 0.5512 | 2.0 | 1264 | 0.5162 | 0.0862 |
| 0.4049 | 3.0 | 1896 | 0.6612 | 0.0470 |
| 0.1756 | 4.0 | 2528 | 0.7155 | 0.0426 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
aujer/autotrain-not_interested_1-1213145894
|
aujer
| 2022-08-02T21:27:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:aujer/autotrain-data-not_interested_1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T21:26:07Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aujer/autotrain-data-not_interested_1
co2_eq_emissions:
emissions: 1.5489539045493725
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1213145894
- CO2 Emissions (in grams): 1.5490
## Validation Metrics
- Loss: 0.904
- Accuracy: 0.735
- Macro F1: 0.566
- Micro F1: 0.735
- Weighted F1: 0.715
- Macro Precision: 0.566
- Micro Precision: 0.735
- Weighted Precision: 0.714
- Macro Recall: 0.583
- Micro Recall: 0.735
- Weighted Recall: 0.735
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/aujer/autotrain-not_interested_1-1213145894
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("aujer/autotrain-not_interested_1-1213145894", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("aujer/autotrain-not_interested_1-1213145894", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
aujer/autotrain-not_interested_2-1213045881
|
aujer
| 2022-08-02T21:15:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:aujer/autotrain-data-not_interested_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T21:14:05Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aujer/autotrain-data-not_interested_2
co2_eq_emissions:
emissions: 1.695519133475222
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1213045881
- CO2 Emissions (in grams): 1.6955
## Validation Metrics
- Loss: 1.607
- Accuracy: 0.535
- Macro F1: 0.306
- Micro F1: 0.535
- Weighted F1: 0.440
- Macro Precision: 0.346
- Micro Precision: 0.535
- Weighted Precision: 0.435
- Macro Recall: 0.345
- Micro Recall: 0.535
- Weighted Recall: 0.535
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/aujer/autotrain-not_interested_2-1213045881
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("aujer/autotrain-not_interested_2-1213045881", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("aujer/autotrain-not_interested_2-1213045881", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
srcocotero/tiny-bert-qa
|
srcocotero
| 2022-08-02T19:58:09Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-27T19:12:14Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: mini_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini_model
This model is a fine-tuned version of [nreimers/BERT-Tiny_L-2_H-128_A-2](https://huggingface.co/nreimers/BERT-Tiny_L-2_H-128_A-2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Rifky/indobert-hoax-classification
|
Rifky
| 2022-08-02T19:32:31Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T16:42:51Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: indobert-hoax-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-hoax-classification
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6230
- Accuracy: 0.8059
## Model description
More information needed
## Intended uses & limitations
More information needed
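As a hedged sketch, the checkpoint can be loaded for inference as follows; the example headline and the meaning of each label index are assumptions:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned IndoBERT classifier and score a single (placeholder) headline.
model_id = "Rifky/indobert-hoax-classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("Contoh judul berita untuk diperiksa.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```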
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.2173070213315e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 85 | 0.5540 | 0.7029 |
| No log | 2.0 | 170 | 0.5432 | 0.7029 |
| No log | 3.0 | 255 | 0.4963 | 0.7441 |
| No log | 4.0 | 340 | 0.5791 | 0.7971 |
| No log | 5.0 | 425 | 0.6230 | 0.8059 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
liujxing/pegassus-samsum
|
liujxing
| 2022-08-02T19:03:10Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-01T14:37:11Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegassus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegassus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5463
## Model description
More information needed
## Intended uses & limitations
More information needed
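A minimal dialogue-summarization sketch is shown below; the sample conversation is illustrative only:
```python
from transformers import pipeline

# Summarize a short SAMSum-style dialogue with the fine-tuned Pegasus checkpoint.
summarizer = pipeline("summarization", model="liujxing/pegassus-samsum")
dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place.\nAnna: Great, see you there."
print(summarizer(dialogue)[0]["summary_text"])
```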
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7619 | 0.54 | 500 | 1.5463 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
QuickSilver007/a2c-AntBulletEnv-v0
|
QuickSilver007
| 2022-08-02T18:56:18Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-02T18:55:13Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1488.76 +/- 155.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and restore the A2C agent.
checkpoint = load_from_hub(repo_id="QuickSilver007/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
elopezlopez/distilbert-base-uncased_fold_9_ternary_v1
|
elopezlopez
| 2022-08-02T18:08:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T17:54:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_9_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_9_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9406
- F1: 0.7841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 292 | 0.5684 | 0.7635 |
| 0.5656 | 2.0 | 584 | 0.5753 | 0.7725 |
| 0.5656 | 3.0 | 876 | 0.6159 | 0.7866 |
| 0.2499 | 4.0 | 1168 | 0.7743 | 0.7828 |
| 0.2499 | 5.0 | 1460 | 0.9820 | 0.7674 |
| 0.1153 | 6.0 | 1752 | 1.2383 | 0.7738 |
| 0.0547 | 7.0 | 2044 | 1.2468 | 0.7815 |
| 0.0547 | 8.0 | 2336 | 1.3480 | 0.7622 |
| 0.0233 | 9.0 | 2628 | 1.3791 | 0.7892 |
| 0.0233 | 10.0 | 2920 | 1.4344 | 0.7841 |
| 0.0142 | 11.0 | 3212 | 1.4958 | 0.7802 |
| 0.0087 | 12.0 | 3504 | 1.5714 | 0.7674 |
| 0.0087 | 13.0 | 3796 | 1.6129 | 0.7956 |
| 0.0111 | 14.0 | 4088 | 1.7799 | 0.7751 |
| 0.0111 | 15.0 | 4380 | 1.7272 | 0.7789 |
| 0.0055 | 16.0 | 4672 | 1.7696 | 0.7866 |
| 0.0055 | 17.0 | 4964 | 1.8622 | 0.7789 |
| 0.003 | 18.0 | 5256 | 1.8563 | 0.7802 |
| 0.0004 | 19.0 | 5548 | 1.8993 | 0.7815 |
| 0.0004 | 20.0 | 5840 | 1.9199 | 0.7853 |
| 0.0005 | 21.0 | 6132 | 1.9003 | 0.7879 |
| 0.0005 | 22.0 | 6424 | 1.9161 | 0.7828 |
| 0.0011 | 23.0 | 6716 | 1.9691 | 0.7815 |
| 0.0017 | 24.0 | 7008 | 1.9492 | 0.7841 |
| 0.0017 | 25.0 | 7300 | 1.9406 | 0.7841 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_8_ternary_v1
|
elopezlopez
| 2022-08-02T17:53:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T17:40:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_8_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_8_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8474
- F1: 0.8022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5398 | 0.7838 |
| 0.5509 | 2.0 | 578 | 0.6062 | 0.7703 |
| 0.5509 | 3.0 | 867 | 0.6563 | 0.7666 |
| 0.2366 | 4.0 | 1156 | 0.7688 | 0.7961 |
| 0.2366 | 5.0 | 1445 | 1.0968 | 0.7690 |
| 0.1247 | 6.0 | 1734 | 1.1414 | 0.7924 |
| 0.0482 | 7.0 | 2023 | 1.2159 | 0.7875 |
| 0.0482 | 8.0 | 2312 | 1.2703 | 0.7887 |
| 0.0245 | 9.0 | 2601 | 1.3401 | 0.7985 |
| 0.0245 | 10.0 | 2890 | 1.4645 | 0.7961 |
| 0.0149 | 11.0 | 3179 | 1.5632 | 0.7801 |
| 0.0149 | 12.0 | 3468 | 1.5249 | 0.7875 |
| 0.0124 | 13.0 | 3757 | 1.6263 | 0.7948 |
| 0.0038 | 14.0 | 4046 | 1.8059 | 0.7764 |
| 0.0038 | 15.0 | 4335 | 1.7649 | 0.7776 |
| 0.0061 | 16.0 | 4624 | 1.8293 | 0.7850 |
| 0.0061 | 17.0 | 4913 | 1.8316 | 0.7887 |
| 0.0022 | 18.0 | 5202 | 1.7628 | 0.7973 |
| 0.0022 | 19.0 | 5491 | 1.8763 | 0.7862 |
| 0.002 | 20.0 | 5780 | 1.8409 | 0.7899 |
| 0.0026 | 21.0 | 6069 | 1.8146 | 0.8022 |
| 0.0026 | 22.0 | 6358 | 1.8420 | 0.7973 |
| 0.0008 | 23.0 | 6647 | 1.8683 | 0.8010 |
| 0.0008 | 24.0 | 6936 | 1.8571 | 0.8010 |
| 0.0015 | 25.0 | 7225 | 1.8474 | 0.8022 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_6_ternary_v1
|
elopezlopez
| 2022-08-02T17:25:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T17:11:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_6_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_6_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9031
- F1: 0.7910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 292 | 0.5235 | 0.7769 |
| 0.566 | 2.0 | 584 | 0.5268 | 0.7923 |
| 0.566 | 3.0 | 876 | 0.6189 | 0.7756 |
| 0.2514 | 4.0 | 1168 | 0.7777 | 0.8026 |
| 0.2514 | 5.0 | 1460 | 0.9380 | 0.7936 |
| 0.1175 | 6.0 | 1752 | 1.0957 | 0.7872 |
| 0.0579 | 7.0 | 2044 | 1.2370 | 0.7923 |
| 0.0579 | 8.0 | 2336 | 1.3739 | 0.7936 |
| 0.0259 | 9.0 | 2628 | 1.3457 | 0.7846 |
| 0.0259 | 10.0 | 2920 | 1.4938 | 0.7872 |
| 0.0125 | 11.0 | 3212 | 1.5921 | 0.7885 |
| 0.0108 | 12.0 | 3504 | 1.6504 | 0.7897 |
| 0.0108 | 13.0 | 3796 | 1.7532 | 0.7756 |
| 0.007 | 14.0 | 4088 | 1.7029 | 0.7821 |
| 0.007 | 15.0 | 4380 | 1.7632 | 0.7987 |
| 0.0067 | 16.0 | 4672 | 1.7084 | 0.7962 |
| 0.0067 | 17.0 | 4964 | 1.7559 | 0.7962 |
| 0.0072 | 18.0 | 5256 | 1.8431 | 0.7987 |
| 0.0028 | 19.0 | 5548 | 1.8689 | 0.7846 |
| 0.0028 | 20.0 | 5840 | 1.8641 | 0.7885 |
| 0.0033 | 21.0 | 6132 | 1.8578 | 0.7923 |
| 0.0033 | 22.0 | 6424 | 1.9071 | 0.7833 |
| 0.003 | 23.0 | 6716 | 1.8959 | 0.7872 |
| 0.0011 | 24.0 | 7008 | 1.9073 | 0.7987 |
| 0.0011 | 25.0 | 7300 | 1.9031 | 0.7910 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_5_ternary_v1
|
elopezlopez
| 2022-08-02T17:10:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T16:56:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_5_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_5_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1368
- F1: 0.7682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 291 | 0.6423 | 0.7465 |
| 0.5563 | 2.0 | 582 | 0.6001 | 0.7631 |
| 0.5563 | 3.0 | 873 | 0.6884 | 0.7785 |
| 0.2595 | 4.0 | 1164 | 0.9920 | 0.7439 |
| 0.2595 | 5.0 | 1455 | 1.1434 | 0.7631 |
| 0.1159 | 6.0 | 1746 | 1.3289 | 0.7606 |
| 0.0473 | 7.0 | 2037 | 1.3966 | 0.7708 |
| 0.0473 | 8.0 | 2328 | 1.4761 | 0.7606 |
| 0.0282 | 9.0 | 2619 | 1.6144 | 0.7542 |
| 0.0282 | 10.0 | 2910 | 1.5642 | 0.7695 |
| 0.0134 | 11.0 | 3201 | 1.7206 | 0.7593 |
| 0.0134 | 12.0 | 3492 | 1.8008 | 0.7542 |
| 0.0059 | 13.0 | 3783 | 1.8056 | 0.7746 |
| 0.002 | 14.0 | 4074 | 1.9160 | 0.7593 |
| 0.002 | 15.0 | 4365 | 2.0223 | 0.7606 |
| 0.0052 | 16.0 | 4656 | 1.9112 | 0.7810 |
| 0.0052 | 17.0 | 4947 | 1.9040 | 0.7772 |
| 0.0056 | 18.0 | 5238 | 1.9852 | 0.7734 |
| 0.0061 | 19.0 | 5529 | 2.0590 | 0.7644 |
| 0.0061 | 20.0 | 5820 | 2.1078 | 0.7631 |
| 0.0044 | 21.0 | 6111 | 2.1177 | 0.7631 |
| 0.0044 | 22.0 | 6402 | 2.0983 | 0.7644 |
| 0.0012 | 23.0 | 6693 | 2.1384 | 0.7670 |
| 0.0012 | 24.0 | 6984 | 2.1467 | 0.7657 |
| 0.0018 | 25.0 | 7275 | 2.1368 | 0.7682 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mrm8488/dqn-SpaceInvadersNoFrameskip-v4-2
|
mrm8488
| 2022-08-02T17:00:07Z | 6 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-02T16:59:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 181.00 +/- 111.42
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrm8488 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mrm8488
```
## Hyperparameters
```python
OrderedDict([('batch_size', 1024),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 800000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
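Beyond the RL Zoo commands above, the saved agent can also be loaded programmatically with Stable Baselines3. This is a minimal sketch only: the zip path below is a hypothetical location for the file fetched by the download command, and an evaluation environment would need to reproduce the `AtariWrapper` and 4-frame stacking listed in the hyperparameters.
```python
from stable_baselines3 import DQN

# Minimal loading sketch; the path is hypothetical and depends on where the
# RL Zoo download command stored the model zip.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
print(model.policy)  # CnnPolicy, as listed in the hyperparameters above
```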
|
elopezlopez/distilbert-base-uncased_fold_1_ternary_v1
|
elopezlopez
| 2022-08-02T16:12:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T14:33:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_1_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_1_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1145
- F1: 0.7757
## Model description
More information needed
## Intended uses & limitations
More information needed
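The card does not document how to run the model, so here is a minimal, hedged inference sketch. The repository id is taken from this card; the three label names (`LABEL_0`–`LABEL_2`) are the Trainer defaults and their mapping to concrete classes is not documented here.
```python
from transformers import pipeline

# Minimal inference sketch; label names are the Trainer defaults and their
# meaning is not specified in this card.
classifier = pipeline(
    "text-classification",
    model="elopezlopez/distilbert-base-uncased_fold_1_ternary_v1",
)
print(classifier("This is an example sentence."))  # e.g. [{'label': 'LABEL_2', 'score': ...}]
```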
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.5580 | 0.7646 |
| 0.555 | 2.0 | 580 | 0.5820 | 0.7670 |
| 0.555 | 3.0 | 870 | 0.6683 | 0.7757 |
| 0.2633 | 4.0 | 1160 | 0.9137 | 0.7844 |
| 0.2633 | 5.0 | 1450 | 1.1367 | 0.7708 |
| 0.1148 | 6.0 | 1740 | 1.2192 | 0.7757 |
| 0.0456 | 7.0 | 2030 | 1.4035 | 0.7633 |
| 0.0456 | 8.0 | 2320 | 1.5185 | 0.7658 |
| 0.0226 | 9.0 | 2610 | 1.6126 | 0.7782 |
| 0.0226 | 10.0 | 2900 | 1.7631 | 0.7658 |
| 0.0061 | 11.0 | 3190 | 1.7279 | 0.7794 |
| 0.0061 | 12.0 | 3480 | 1.8548 | 0.7584 |
| 0.0076 | 13.0 | 3770 | 1.9052 | 0.7646 |
| 0.0061 | 14.0 | 4060 | 1.9100 | 0.7757 |
| 0.0061 | 15.0 | 4350 | 1.9280 | 0.7732 |
| 0.0025 | 16.0 | 4640 | 1.9991 | 0.7745 |
| 0.0025 | 17.0 | 4930 | 1.9960 | 0.7757 |
| 0.0035 | 18.0 | 5220 | 2.0018 | 0.7708 |
| 0.0015 | 19.0 | 5510 | 2.1099 | 0.7646 |
| 0.0015 | 20.0 | 5800 | 2.1061 | 0.7695 |
| 0.0022 | 21.0 | 6090 | 2.0941 | 0.7757 |
| 0.0022 | 22.0 | 6380 | 2.0967 | 0.7794 |
| 0.0005 | 23.0 | 6670 | 2.1133 | 0.7745 |
| 0.0005 | 24.0 | 6960 | 2.1042 | 0.7782 |
| 0.0021 | 25.0 | 7250 | 2.1145 | 0.7757 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ligerre/xlm-roberta-base-finetuned-panx-en
|
ligerre
| 2022-08-02T16:04:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-02T15:48:23Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7032474804031354
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3932
- F1: 0.7032
## Model description
More information needed
## Intended uses & limitations
More information needed
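No usage example is given in the card; a minimal sketch for English NER inference with this checkpoint might look as follows (`aggregation_strategy="simple"` merges sub-word pieces into whole entity spans).
```python
from transformers import pipeline

# Minimal sketch: token classification with sub-word pieces grouped into entities.
ner = pipeline(
    "token-classification",
    model="ligerre/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("George Washington lived in Virginia."))
```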
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1504 | 1.0 | 50 | 0.5992 | 0.4786 |
| 0.5147 | 2.0 | 100 | 0.4307 | 0.6468 |
| 0.3717 | 3.0 | 150 | 0.3932 | 0.7032 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Makabaka/bert-base-uncased-EnglishLawAI
|
Makabaka
| 2022-08-02T15:51:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-15T15:50:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5503
## Model description
More information needed
## Intended uses & limitations
More information needed
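The card gives no usage details; a minimal masked-language-modelling sketch with this checkpoint (the example sentence is invented for illustration) could be:
```python
from transformers import pipeline

# Minimal sketch: top predictions for the [MASK] token.
fill_mask = pipeline("fill-mask", model="Makabaka/bert-base-uncased-EnglishLawAI")
for prediction in fill_mask("The contract was signed by both [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```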
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6214 | 1.0 | 291 | 2.2471 |
| 2.0594 | 2.0 | 582 | 1.9293 |
| 1.8563 | 3.0 | 873 | 1.7961 |
| 1.7442 | 4.0 | 1164 | 1.7518 |
| 1.657 | 5.0 | 1455 | 1.7390 |
| 1.577 | 6.0 | 1746 | 1.7173 |
| 1.5071 | 7.0 | 2037 | 1.6223 |
| 1.4661 | 8.0 | 2328 | 1.5691 |
| 1.4365 | 9.0 | 2619 | 1.6280 |
| 1.3827 | 10.0 | 2910 | 1.4641 |
| 1.3517 | 11.0 | 3201 | 1.6498 |
| 1.3294 | 12.0 | 3492 | 1.3006 |
| 1.2836 | 13.0 | 3783 | 1.6520 |
| 1.2867 | 14.0 | 4074 | 1.6064 |
| 1.2819 | 15.0 | 4365 | 1.4131 |
| 1.2611 | 16.0 | 4656 | 1.5503 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ligerre/xlm-roberta-base-finetuned-panx-it
|
ligerre
| 2022-08-02T15:48:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-02T15:32:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8245828245828245
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2401
- F1: 0.8246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8187 | 1.0 | 70 | 0.3325 | 0.7337 |
| 0.2829 | 2.0 | 140 | 0.2554 | 0.8003 |
| 0.1894 | 3.0 | 210 | 0.2401 | 0.8246 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/iamsamirarora-naval-vivek_investor
|
huggingtweets
| 2022-08-02T15:16:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-02T15:15:22Z |
---
language: en
thumbnail: http://www.huggingtweets.com/iamsamirarora-naval-vivek_investor/1659453403535/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/853146176295759872/YiAPXQ0s_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479277051802574853/qs6u-imt_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Naval & Samir Arora & Vivek</div>
<div style="text-align: center; font-size: 14px;">@iamsamirarora-naval-vivek_investor</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Naval & Samir Arora & Vivek.
| Data | Naval | Samir Arora | Vivek |
| --- | --- | --- | --- |
| Tweets downloaded | 3211 | 3250 | 3250 |
| Retweets | 195 | 76 | 96 |
| Short tweets | 612 | 973 | 601 |
| Tweets kept | 2404 | 2201 | 2553 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1oa4j8zi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iamsamirarora-naval-vivek_investor's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21s56oiv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21s56oiv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iamsamirarora-naval-vivek_investor')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ligerre/xlm-roberta-base-finetuned-panx-de
|
ligerre
| 2022-08-02T14:39:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-02T14:16:11Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863677639046538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 |
| 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 |
| 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
|
Sotireas
| 2022-08-02T13:43:18Z | 28 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT](https://huggingface.co/Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0853
## Model description
More information needed
## Intended uses & limitations
More information needed
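No usage example is provided; a minimal extractive question-answering sketch with this checkpoint (the question/context pair below is invented for illustration) might be:
```python
from transformers import pipeline

# Minimal sketch: extractive QA; the question and context are illustrative only.
qa = pipeline(
    "question-answering",
    model="Sotireas/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT",
)
result = qa(
    question="Which contaminant was detected in the samples?",
    context="Trace amounts of lead were detected in the water samples collected downstream.",
)
print(result["answer"], result["score"])
```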
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 21 | 3.8118 |
| No log | 2.0 | 42 | 3.5006 |
| No log | 3.0 | 63 | 3.1242 |
| No log | 4.0 | 84 | 2.9528 |
| No log | 5.0 | 105 | 2.9190 |
| No log | 6.0 | 126 | 2.9876 |
| No log | 7.0 | 147 | 3.0574 |
| No log | 8.0 | 168 | 3.0718 |
| No log | 9.0 | 189 | 3.0426 |
| No log | 10.0 | 210 | 3.0853 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
s-nlp/GenChal_2022_nigula
|
s-nlp
| 2022-08-02T13:43:11Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"feedback comment generation for writing learning",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-08T15:17:59Z |
---
language:
- en
tags:
- feedback comment generation for writing learning
licenses:
- cc-by-nc-sa
---
## Model overview
This model was trained for the [GenChal 2022: Feedback Comment Generation for Writing Learning](https://fcg.sharedtask.org/) shared task.
In this task, the model receives a text containing an error together with the exact span of that error, and should return a natural-language comment that explains the nature of the error.
## How to use
```python
# install the shared-task package (notebook syntax) and load the generator
!pip install feedback_generation_nigula
from feedback_generation_nigula.generator import FeedbackGenerator

fg = FeedbackGenerator(cuda_index=0)

text_with_error = "The smoke flow my face ."
error_span = (10, 17)  # character offsets of the erroneous span
fg.get_feedback([text_with_error], [error_span])
# expected output: ["When the <verb> <<flow>> is used as an <intransitive verb> to express ''to move in a stream'', a <preposition> needs to be placed to indicate the direction"]
```
## Model training details
#### Data
The data was provided in the following way
```
input sentence [\t] offset range [\t] feedback comment
```
Here are some examples
```
The smoke flow my face . 10:17 When the <verb> <<flow>> is used as an <intransitive verb> to express ''to move in a stream'', a <preposition> needs to be placed to indicate the direction. 'To' and 'towards' are <prepositions> that indicate direction.
I want to stop smoking during driving bicycle . 23:29 A <gerund> does not normally follow the <preposition> <<during>>. Think of an expression using the <conjunction> 'while' instead of a <preposition>.
```
Grammar terms are highlighted with '< ... >' marks and word examples with '<< ... >>' marks.
#### Data preprocessing
We lowercased the text, separated it from any punctuation, including the task-specific marks (<< >>), and explicitly marked the error in the original text using << >>.
```
the smoke < < flow > > < < my > > face . 10:17 When the < verb > < < flow > > is used as an < intransitive verb > to express '' to move in a stream '', a < preposition > needs to be placed to indicate the direction. ' to ' and ' towards ' are < prepositions > that indicate direction .
i want to stop smoking < < during > > driving bicycle . 23:29 a < gerund > does not normally follow the < preposition > < < during > > . think of an expression using the < conjunction > ' while ' instead of a < preposition > .
```
#### Data augmentation
The main feature of our training pipeline was data augmentation. The idea of the augmentation is as follows: we cut the existing erroneous text after the last word that was syntactically connected to the words inside the error span (syntactic dependencies were parsed automatically with spaCy), and used this truncated text as a prompt for a language model (we used [GPT-Neo 1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B)) to generate additional examples.
Using both the initial and the augmented data, we fine-tuned [t5-large](https://huggingface.co/t5-large).
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
|
Petros89/bert-finetuned-ner
|
Petros89
| 2022-08-02T13:19:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-02T13:00:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9320436507936508
- name: Recall
type: recall
value: 0.9486704813194211
- name: F1
type: f1
value: 0.9402835696413678
- name: Accuracy
type: accuracy
value: 0.9861217401542356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9320
- Recall: 0.9487
- F1: 0.9403
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0889 | 1.0 | 1756 | 0.0748 | 0.9060 | 0.9263 | 0.9160 | 0.9800 |
| 0.0381 | 2.0 | 3512 | 0.0631 | 0.9296 | 0.9468 | 0.9381 | 0.9855 |
| 0.0205 | 3.0 | 5268 | 0.0611 | 0.9320 | 0.9487 | 0.9403 | 0.9861 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.7.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
LawalAfeez/en-fr-translation
|
LawalAfeez
| 2022-08-02T12:34:19Z | 13 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-02T12:30:18Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: en-fr-translation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# en-fr-translation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7838
- Validation Loss: 1.5505
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
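The card does not show how to run the model. Since the repository ships TensorFlow weights, a minimal TF sketch (assuming the standard T5 task prefix for EN→FR translation) could be:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Minimal sketch: greedy EN->FR generation with the TF weights; the
# "translate English to French:" prefix is the usual T5 convention.
tokenizer = AutoTokenizer.from_pretrained("LawalAfeez/en-fr-translation")
model = TFAutoModelForSeq2SeqLM.from_pretrained("LawalAfeez/en-fr-translation")

inputs = tokenizer("translate English to French: The weather is nice today.", return_tensors="tf")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```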
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9137 | 1.6092 | 0 |
| 1.7838 | 1.5505 | 1 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sepidmnorozy/finetuned-sentiment-withGPU
|
sepidmnorozy
| 2022-08-02T12:33:09Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-04T13:26:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-sentiment-model_withGPU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-10-samples_withGPU
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3893
- Accuracy: 0.8744
- F1: 0.8684
- Precision: 0.9126
- Recall: 0.8283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3631 | 1.0 | 7088 | 0.3622 | 0.8638 | 0.8519 | 0.9334 | 0.7835 |
| 0.35 | 2.0 | 14176 | 0.3875 | 0.8714 | 0.8622 | 0.9289 | 0.8044 |
| 0.3262 | 3.0 | 21264 | 0.3893 | 0.8744 | 0.8684 | 0.9126 | 0.8283 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pannaga/wav2vec2-base-timit-demo-google-colab-testing
|
pannaga
| 2022-08-02T12:18:36Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-21T10:06:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab-testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab-testing
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
wenkai-li/distilbert-base-uncased-finetuned-wikiandmark_epoch50
|
wenkai-li
| 2022-08-02T12:11:19Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T11:02:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-wikiandmark_epoch50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-wikiandmark_epoch50
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0049
- eval_accuracy: 0.9995
- eval_runtime: 29.1585
- eval_samples_per_second: 127.613
- eval_steps_per_second: 4.013
- epoch: 6.0
- step: 4656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pannaga/wav2vec2-large-xls-r-300m-turkish-colab
|
pannaga
| 2022-08-02T11:51:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-27T10:22:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9701
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.3108 | 16.0 | 400 | 2.9378 | 1.0 |
| 3.0115 | 32.0 | 800 | 2.9701 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
dfsj/distilbert-base-uncased-distilled-clinc
|
dfsj
| 2022-08-02T11:38:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-01T00:46:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9448387096774193
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3163
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.3518 | 0.7510 |
| 2.7559 | 2.0 | 636 | 1.2235 | 0.8506 |
| 2.7559 | 3.0 | 954 | 0.6786 | 0.9168 |
| 1.0767 | 4.0 | 1272 | 0.4668 | 0.9368 |
| 0.4584 | 5.0 | 1590 | 0.3810 | 0.9410 |
| 0.4584 | 6.0 | 1908 | 0.3479 | 0.9435 |
| 0.2876 | 7.0 | 2226 | 0.3282 | 0.9455 |
| 0.2285 | 8.0 | 2544 | 0.3201 | 0.9452 |
| 0.2285 | 9.0 | 2862 | 0.3163 | 0.9448 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.0.0
- Tokenizers 0.12.1
|
JmPaunlagui/Improve
|
JmPaunlagui
| 2022-08-02T10:17:55Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-08-02T09:42:09Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision |
|----|-------------|-----|------|------|-------|-------|------------------|
|Adam|0.001|0.0|0.9|0.999|1e-07|False|float32|
|
DrY/bert-finetuned-squad
|
DrY
| 2022-08-02T10:16:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-02T07:52:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dfsj/distilbert-base-uncased-finetuned-clinc
|
dfsj
| 2022-08-02T10:08:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-31T12:46:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9187096774193548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7737
- Accuracy: 0.9187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2909 | 0.7439 |
| 3.7915 | 2.0 | 636 | 1.8815 | 0.83 |
| 3.7915 | 3.0 | 954 | 1.1550 | 0.8948 |
| 1.6979 | 4.0 | 1272 | 0.8583 | 0.9119 |
| 0.8991 | 5.0 | 1590 | 0.7737 | 0.9187 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.0.0
- Tokenizers 0.12.1
|
yashwantk/distilbert-base-uncased-finetuned-squad
|
yashwantk
| 2022-08-02T09:05:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-31T08:07:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2862 | 1.0 | 8235 | 1.2491 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jinghan/roberta-base-finetuned-wnli
|
jinghan
| 2022-08-02T09:04:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T08:49:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: wnli
split: train
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-wnli
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6880
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6880 | 0.5634 |
| No log | 2.0 | 80 | 0.6851 | 0.5634 |
| No log | 3.0 | 120 | 0.6961 | 0.4366 |
| No log | 4.0 | 160 | 0.6906 | 0.5634 |
| No log | 5.0 | 200 | 0.6891 | 0.5634 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
commanderstrife/PV-Bio_clinicalBERT-superset
|
commanderstrife
| 2022-08-02T08:58:17Z | 7 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:pv_dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-02T05:36:04Z |
---
tags:
- generated_from_trainer
datasets:
- pv_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: PV-Bio_clinicalBERT-superset
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: pv_dataset
type: pv_dataset
config: PVDatasetCorpus
split: train
args: PVDatasetCorpus
metrics:
- name: Precision
type: precision
value: 0.7055946686730801
- name: Recall
type: recall
value: 0.7473672226333467
- name: F1
type: f1
value: 0.7258804666334938
- name: Accuracy
type: accuracy
value: 0.9656573815513143
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PV-Bio_clinicalBERT-superset
This model is a fine-tuned version of [giacomomiolo/electramed_base_scivocab_1M](https://huggingface.co/giacomomiolo/electramed_base_scivocab_1M) on the pv_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2082
- Precision: 0.7056
- Recall: 0.7474
- F1: 0.7259
- Accuracy: 0.9657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.063 | 1.0 | 1813 | 0.1061 | 0.6453 | 0.7306 | 0.6853 | 0.9623 |
| 0.0086 | 2.0 | 3626 | 0.1068 | 0.6620 | 0.7516 | 0.7040 | 0.9647 |
| 0.0089 | 3.0 | 5439 | 0.1265 | 0.7026 | 0.7300 | 0.7160 | 0.9657 |
| 0.004 | 4.0 | 7252 | 0.1369 | 0.6820 | 0.7601 | 0.7189 | 0.9638 |
| 0.0004 | 5.0 | 9065 | 0.1573 | 0.6937 | 0.7602 | 0.7254 | 0.9656 |
| 0.0184 | 6.0 | 10878 | 0.1707 | 0.7078 | 0.7475 | 0.7271 | 0.9662 |
| 0.0009 | 7.0 | 12691 | 0.1787 | 0.7116 | 0.7398 | 0.7254 | 0.9662 |
| 0.0006 | 8.0 | 14504 | 0.1874 | 0.6979 | 0.7576 | 0.7265 | 0.9655 |
| 0.0008 | 9.0 | 16317 | 0.1970 | 0.7083 | 0.7475 | 0.7273 | 0.9660 |
| 0.0003 | 10.0 | 18130 | 0.2082 | 0.7056 | 0.7474 | 0.7259 | 0.9657 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
silviacamplani/distilbert-base-uncased-finetuned-ner-conll2003_100train
|
silviacamplani
| 2022-08-02T08:55:52Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-02T08:54:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-base-uncased-finetuned-ner-conll2003_100train
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-base-uncased-finetuned-ner-conll2003_100train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4072
- Validation Loss: 1.4582
- Train Precision: 0.0
- Train Recall: 0.0
- Train F1: 0.0
- Train Accuracy: 0.7920
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.0837 | 1.8526 | 0.0013 | 0.0015 | 0.0014 | 0.7006 | 0 |
| 1.6450 | 1.5672 | 0.0 | 0.0 | 0.0 | 0.7916 | 1 |
| 1.4072 | 1.4582 | 0.0 | 0.0 | 0.0 | 0.7920 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kyoumiaoi/wav2vec2-base-timit-demo-google-colab
|
kyoumiaoi
| 2022-08-02T08:28:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-02T06:15:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5499
- Wer: 0.3435
## Model description
More information needed
## Intended uses & limitations
More information needed
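No usage example is given; a minimal transcription sketch could look like the following (the audio path is a placeholder, and 16 kHz mono input is assumed, as expected by wav2vec2-base).
```python
from transformers import pipeline

# Minimal sketch: transcribe a local audio file; the path is a placeholder and
# decoding audio from disk requires ffmpeg to be installed.
asr = pipeline(
    "automatic-speech-recognition",
    model="kyoumiaoi/wav2vec2-base-timit-demo-google-colab",
)
print(asr("path/to/audio.wav")["text"])
```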
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.599 | 1.0 | 500 | 2.1267 | 0.9976 |
| 1.016 | 2.01 | 1000 | 0.6193 | 0.5443 |
| 0.5299 | 3.01 | 1500 | 0.5324 | 0.4889 |
| 0.3626 | 4.02 | 2000 | 0.4525 | 0.4402 |
| 0.2854 | 5.02 | 2500 | 0.4266 | 0.4233 |
| 0.2373 | 6.02 | 3000 | 0.4713 | 0.4082 |
| 0.1979 | 7.03 | 3500 | 0.4778 | 0.4018 |
| 0.1761 | 8.03 | 4000 | 0.4585 | 0.3947 |
| 0.1537 | 9.04 | 4500 | 0.5297 | 0.3946 |
| 0.1379 | 10.04 | 5000 | 0.4988 | 0.3856 |
| 0.124 | 11.04 | 5500 | 0.5262 | 0.3852 |
| 0.11 | 12.05 | 6000 | 0.5545 | 0.3854 |
| 0.106 | 13.05 | 6500 | 0.5196 | 0.3805 |
| 0.0918 | 14.06 | 7000 | 0.4515 | 0.3655 |
| 0.0829 | 15.06 | 7500 | 0.5087 | 0.3722 |
| 0.0775 | 16.06 | 8000 | 0.4980 | 0.3781 |
| 0.0685 | 17.07 | 8500 | 0.5564 | 0.3650 |
| 0.0655 | 18.07 | 9000 | 0.5323 | 0.3672 |
| 0.0578 | 19.08 | 9500 | 0.5675 | 0.3637 |
| 0.052 | 20.08 | 10000 | 0.5604 | 0.3664 |
| 0.0512 | 21.08 | 10500 | 0.5922 | 0.3804 |
| 0.0431 | 22.09 | 11000 | 0.6379 | 0.3754 |
| 0.0428 | 23.09 | 11500 | 0.5905 | 0.3764 |
| 0.0393 | 24.1 | 12000 | 0.5667 | 0.3542 |
| 0.0326 | 25.1 | 12500 | 0.5612 | 0.3537 |
| 0.0289 | 26.1 | 13000 | 0.5618 | 0.3475 |
| 0.0298 | 27.11 | 13500 | 0.5578 | 0.3439 |
| 0.0264 | 28.11 | 14000 | 0.5547 | 0.3433 |
| 0.026 | 29.12 | 14500 | 0.5499 | 0.3435 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
yirenl2/plm_qa
|
yirenl2
| 2022-08-02T06:43:12Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-01T03:06:27Z |
---
language: en
datasets:
- squad_v2
license: cc-by-4.0
model-index:
- name: plm_qa
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 0
verified: false
- name: F1
type: f1
value: 0
verified: false
- name: total
type: total
value: 11869
verified: false
---
# roberta-base for QA fine-tuned on community safety domain data
We fine-tuned the RoBERTa-based model (https://huggingface.co/deepset/roberta-base-squad2) on LiveSafe community safety dialogue data for event argument extraction, framed as question answering.
### Using model in Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "yirenl2/plm_qa"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the location of the incident?',
'context': 'I was attacked by someone in front of the bus station.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
|
huggingtweets/itsjefftiedrich
|
huggingtweets
| 2022-08-02T02:50:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-02T02:48:45Z |
---
language: en
thumbnail: http://www.huggingtweets.com/itsjefftiedrich/1659408624518/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1009932396333031424/8FzKlCfB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jeff Tiedrich</div>
<div style="text-align: center; font-size: 14px;">@itsjefftiedrich</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jeff Tiedrich.
| Data | Jeff Tiedrich |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 6 |
| Short tweets | 753 |
| Tweets kept | 2491 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/311xv04i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsjefftiedrich's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2zwvvvq6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2zwvvvq6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itsjefftiedrich')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rdruce/ddpm-cheese-32
|
rdruce
| 2022-08-02T00:34:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-02T00:05:54Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-cheese-32
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline  # minimal sketch; uses the API of recent diffusers releases
pipeline = DDPMPipeline.from_pretrained("rdruce/ddpm-cheese-32")
image = pipeline().images[0]  # one generated PIL image
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rdruce/ddpm-cheese-32/tensorboard?#scalars)
|
muhtasham/bert-tiny-finetuned-wnut17-ner
|
muhtasham
| 2022-08-01T23:26:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-01T23:24:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-tiny-finetuned-wnut17-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: train
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.0
- name: Recall
type: recall
value: 0.0
- name: F1
type: f1
value: 0.0
- name: Accuracy
type: accuracy
value: 0.8960890010322284
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-wnut17-ner
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6054
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 27 | 1.1060 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 2.0 | 54 | 0.9075 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 3.0 | 81 | 0.7978 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 4.0 | 108 | 0.7333 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 5.0 | 135 | 0.6929 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 6.0 | 162 | 0.6661 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 7.0 | 189 | 0.6477 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 8.0 | 216 | 0.6346 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 9.0 | 243 | 0.6251 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 10.0 | 270 | 0.6182 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 11.0 | 297 | 0.6132 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 12.0 | 324 | 0.6097 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 13.0 | 351 | 0.6073 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 14.0 | 378 | 0.6059 | 0.0 | 0.0 | 0.0 | 0.8961 |
| No log | 15.0 | 405 | 0.6054 | 0.0 | 0.0 | 0.0 | 0.8961 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-tiny-finetuned-xglue-ner
|
muhtasham
| 2022-08-01T23:20:07Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:xglue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-01T23:13:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xglue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-tiny-finetuned-xglue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xglue
type: xglue
config: ner
split: train
args: ner
metrics:
- name: Precision
type: precision
value: 0.630759453447728
- name: Recall
type: recall
value: 0.6681252103668799
- name: F1
type: f1
value: 0.6489048708728343
- name: Accuracy
type: accuracy
value: 0.9274310133922189
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-xglue-ner
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the xglue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2489
- Precision: 0.6308
- Recall: 0.6681
- F1: 0.6489
- Accuracy: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4082 | 1.0 | 1756 | 0.3326 | 0.5600 | 0.5798 | 0.5697 | 0.9118 |
| 0.2974 | 2.0 | 3512 | 0.2635 | 0.6143 | 0.6562 | 0.6346 | 0.9248 |
| 0.2741 | 3.0 | 5268 | 0.2489 | 0.6308 | 0.6681 | 0.6489 | 0.9274 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/InformalToFormalLincoln60Paraphrase
|
BigSalmon
| 2022-08-01T21:24:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-01T20:53:51Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln60Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln60Paraphrase")
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
|
Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa
|
Intel
| 2022-08-01T21:06:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2111.05754",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-29T11:58:55Z |
---
language: en
license: apache-2.0
tags:
- fill-mask
datasets:
- wikipedia
- bookcorpus
---
# 80% 1x4 Block Sparse BERT-Base (uncased) Prune OFA
This model was created using the Prune OFA method described in [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), presented at the ENLSP NeurIPS Workshop 2021.
For further details on the model and its results, see our paper and our implementation, available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
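A minimal usage sketch with the `transformers` fill-mask pipeline (assumed usage based on this card's `fill-mask` tag, not an official snippet):
```python
from transformers import pipeline

# Masked-token prediction with the 80% block-sparse BERT-Base checkpoint
unmasker = pipeline("fill-mask", model="Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa")
print(unmasker("Paris is the [MASK] of France."))
```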
|
Intel/bert-large-uncased-sparse-80-1x4-block-pruneofa
|
Intel
| 2022-08-01T21:05:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2111.05754",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-29T11:42:51Z |
---
language: en
license: apache-2.0
tags:
- fill-mask
datasets:
- wikipedia
- bookcorpus
---
# 80% 1x4 Block Sparse BERT-Large (uncased) Prune OFA
This model was created using the Prune OFA method described in [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), presented at the ENLSP NeurIPS Workshop 2021.
For further details on the model and its results, see our paper and our implementation, available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
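Usage mirrors the base-sized variant; a brief fill-mask sketch (assumed, not an official snippet):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Intel/bert-large-uncased-sparse-80-1x4-block-pruneofa")
print(unmasker("The capital of France is [MASK]."))
```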
|
Intel/bert-large-uncased-squadv1.1-sparse-80-1x4-block-pruneofa
|
Intel
| 2022-08-01T21:04:22Z | 75 | 1 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"question-answering",
"en",
"arxiv:2111.05754",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-27T20:17:27Z |
---
language: en
license: apache-2.0
---
# 80% 1x4 Block Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1
This model is a result of fine-tuning a Prune OFA 80% 1x4 block sparse pre-trained BERT-Large combined with knowledge distillation.
This model yields the following results on SQuADv1.1 development set:<br>
`{"exact_match": 84.673, "f1": 91.174}`
For further details see our paper, [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), and our open source implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
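A minimal question-answering sketch with the `transformers` pipeline (assumed usage based on this card's task tag; the question and context below are illustrative only):
```python
from transformers import pipeline

# Extractive QA with the sparse BERT-Large checkpoint fine-tuned on SQuADv1.1
qa = pipeline("question-answering", model="Intel/bert-large-uncased-squadv1.1-sparse-80-1x4-block-pruneofa")
result = qa(
    question="What does Prune Once for All produce?",
    context="Prune Once for All produces sparse pre-trained language models that can be fine-tuned on downstream tasks.",
)
print(result["answer"], result["score"])
```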
|
mrm8488/pyramidsrnd
|
mrm8488
| 2022-08-01T20:36:43Z | 9 | 1 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-08-01T20:36:37Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: mrm8488/pyramidsrnd
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
SharpAI/mal-tls-bert-base-relu-w8a8
|
SharpAI
| 2022-08-01T20:23:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-01T20:22:51Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls-bert-base-relu-w8a8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal_tls-bert-base-relu-w8a8
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.10.3
|
Lvxue/finetuned-mt5-base
|
Lvxue
| 2022-08-01T19:33:37Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-28T01:51:27Z |
---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: finetuned-mt5-base
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 27.1659
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mt5-base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3594
- Bleu: 27.1659
- Gen Len: 43.9575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
vidyavenkappa/pegasus-samsum
|
vidyavenkappa
| 2022-08-01T18:30:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-30T12:10:24Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6151 | 0.54 | 500 | 1.4238 |
| 1.3357 | 1.09 | 1000 | 1.3629 |
| 1.4423 | 1.63 | 1500 | 1.3380 |
| 1.3747 | 2.17 | 2000 | 1.3218 |
| 1.3397 | 2.72 | 2500 | 1.3124 |
| 1.2706 | 3.26 | 3000 | 1.3149 |
| 1.1849 | 3.8 | 3500 | 1.3120 |
| 1.2222 | 4.35 | 4000 | 1.3120 |
| 1.2339 | 4.89 | 4500 | 1.3086 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
silviacamplani/twitter-roberta-base-finetuned-ner-wnut
|
silviacamplani
| 2022-08-01T16:26:39Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"roberta",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-01T15:50:19Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/twitter-roberta-base-finetuned-ner-wnut
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/twitter-roberta-base-finetuned-ner-wnut
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0812
- Validation Loss: 0.2553
- Train Precision: 0.6263
- Train Recall: 0.5191
- Train F1: 0.5677
- Train Accuracy: 0.9398
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'Adam', 'config': {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.0813 | 0.2553 | 0.6263 | 0.5191 | 0.5677 | 0.9398 | 0 |
| 0.0815 | 0.2553 | 0.6263 | 0.5191 | 0.5677 | 0.9398 | 1 |
| 0.0812 | 0.2553 | 0.6263 | 0.5191 | 0.5677 | 0.9398 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
meln1k/a2c-HalfCheetahBulletEnv-v0
|
meln1k
| 2022-08-01T14:54:29Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"HalfCheetahBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-01T11:19:36Z |
---
library_name: stable-baselines3
tags:
- HalfCheetahBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1893.95 +/- 69.15
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetahBulletEnv-v0
type: HalfCheetahBulletEnv-v0
---
# **A2C** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of an **A2C** agent playing **HalfCheetahBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
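Until the author adds an official snippet, a possible sketch using `huggingface_sb3`; the checkpoint filename `a2c-HalfCheetahBulletEnv-v0.zip` is an assumption based on the usual naming convention:
```python
import gym
import pybullet_envs  # noqa: F401  # registers HalfCheetahBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from this repository (filename is a guess)
checkpoint = load_from_hub("meln1k/a2c-HalfCheetahBulletEnv-v0", "a2c-HalfCheetahBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("HalfCheetahBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```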
|
turhancan97/Reinforce-1
|
turhancan97
| 2022-08-01T14:02:54Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-01T14:02:43Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- metrics:
- type: mean_reward
value: 98.30 +/- 25.19
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
rdruce/ddpm-butterflies-128
|
rdruce
| 2022-08-01T12:46:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-01T11:33:05Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
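In the meantime, a minimal sketch assuming the standard `DDPMPipeline` API (the output attribute may differ across diffusers versions):
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("rdruce/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly_sample.png")
```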
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rdruce/ddpm-butterflies-128/tensorboard?#scalars)
|
aminjalali/distilbert-base-uncased-finetuned-emotion
|
aminjalali
| 2022-08-01T11:56:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-31T19:26:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258000202272497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2123
- Accuracy: 0.926
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8198 | 1.0 | 250 | 0.3147 | 0.904 | 0.9003 |
| 0.2438 | 2.0 | 500 | 0.2123 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dminiotas05/distilbert-base-uncased-finetuned-ft750_reg3
|
dminiotas05
| 2022-08-01T11:51:26Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-01T11:22:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ft750_reg3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft750_reg3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6143
- Mse: 0.6143
- Mae: 0.6022
- R2: 0.4218
- Accuracy: 0.52
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.5241 | 1.0 | 188 | 0.6143 | 0.6143 | 0.6022 | 0.4218 | 0.52 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Qilex/VirtualPetDiffusion
|
Qilex
| 2022-08-01T10:56:07Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-07-31T16:02:28Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# neoGen3
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
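In the meantime, a minimal sketch assuming the standard `DDPMPipeline` API; the repository id below is taken from this entry's model id, while the card's TensorBoard link refers to `neoGen3`:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("Qilex/VirtualPetDiffusion")
image = pipeline().images[0]
image.save("sample.png")
```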
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/Qilex/neoGen3/tensorboard?#scalars)
|
reachrkr/Reinforce-Pong-PLE-v0
|
reachrkr
| 2022-08-01T10:55:41Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-01T06:01:48Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pong-PLE-v0
results:
- metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
dminiotas05/camembert-base-finetuned-ft750_reg2
|
dminiotas05
| 2022-08-01T10:10:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-28T11:03:55Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: camembert-base-finetuned-ft750_reg2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-ft750_reg2
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6449
- Mse: 0.6449
- Mae: 0.6171
- R2: 0.3929
- Accuracy: 0.504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.6283 | 1.0 | 750 | 0.6074 | 0.6074 | 0.6086 | 0.4282 | 0.4887 |
| 0.5007 | 2.0 | 1500 | 0.6449 | 0.6449 | 0.6171 | 0.3929 | 0.504 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
lakshaywadhwa1993/ner_hindi_bert
|
lakshaywadhwa1993
| 2022-08-01T09:14:58Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-01T09:05:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: ner_hindi_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_hindi_bert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3713
- Overall Precision: 0.8942
- Overall Recall: 0.8972
- Overall F1: 0.8957
- Overall Accuracy: 0.9367
- Loc F1: 0.8766
- Org F1: 0.8489
- Per F1: 0.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Loc F1 | Org F1 | Per F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:------:|:------:|
| 0.2993 | 3.19 | 1000 | 0.3230 | 0.8779 | 0.8786 | 0.8782 | 0.9244 | 0.8535 | 0.8270 | 0.9358 |
| 0.0641 | 6.39 | 2000 | 0.3713 | 0.8942 | 0.8972 | 0.8957 | 0.9367 | 0.8766 | 0.8489 | 0.9454 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
psroy/wav2vec2-base-timit-demo-colab
|
psroy
| 2022-08-01T08:59:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-29T10:16:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4772
- Wer: 0.2821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6949 | 0.87 | 500 | 2.4599 | 0.9999 |
| 0.9858 | 1.73 | 1000 | 0.5249 | 0.4674 |
| 0.4645 | 2.6 | 1500 | 0.4604 | 0.3900 |
| 0.3273 | 3.46 | 2000 | 0.3939 | 0.3612 |
| 0.2474 | 4.33 | 2500 | 0.4150 | 0.3560 |
| 0.2191 | 5.19 | 3000 | 0.3855 | 0.3344 |
| 0.1662 | 6.06 | 3500 | 0.3779 | 0.3258 |
| 0.1669 | 6.92 | 4000 | 0.4841 | 0.3286 |
| 0.151 | 7.79 | 4500 | 0.4182 | 0.3219 |
| 0.1175 | 8.65 | 5000 | 0.4194 | 0.3107 |
| 0.1103 | 9.52 | 5500 | 0.4256 | 0.3129 |
| 0.1 | 10.38 | 6000 | 0.4352 | 0.3089 |
| 0.0949 | 11.25 | 6500 | 0.4649 | 0.3160 |
| 0.0899 | 12.11 | 7000 | 0.4472 | 0.3065 |
| 0.0787 | 12.98 | 7500 | 0.4763 | 0.3128 |
| 0.0742 | 13.84 | 8000 | 0.4321 | 0.3034 |
| 0.067 | 14.71 | 8500 | 0.4562 | 0.3076 |
| 0.063 | 15.57 | 9000 | 0.4541 | 0.3102 |
| 0.0624 | 16.44 | 9500 | 0.5113 | 0.3040 |
| 0.0519 | 17.3 | 10000 | 0.4925 | 0.3008 |
| 0.0525 | 18.17 | 10500 | 0.4710 | 0.2987 |
| 0.046 | 19.03 | 11000 | 0.4781 | 0.2977 |
| 0.0455 | 19.9 | 11500 | 0.4572 | 0.2969 |
| 0.0394 | 20.76 | 12000 | 0.5256 | 0.2966 |
| 0.0373 | 21.63 | 12500 | 0.4723 | 0.2921 |
| 0.0375 | 22.49 | 13000 | 0.4640 | 0.2847 |
| 0.0334 | 23.36 | 13500 | 0.4740 | 0.2917 |
| 0.0304 | 24.22 | 14000 | 0.4817 | 0.2874 |
| 0.0291 | 25.09 | 14500 | 0.4722 | 0.2896 |
| 0.0247 | 25.95 | 15000 | 0.4765 | 0.2870 |
| 0.0223 | 26.82 | 15500 | 0.4728 | 0.2821 |
| 0.0223 | 27.68 | 16000 | 0.4690 | 0.2834 |
| 0.0207 | 28.55 | 16500 | 0.4706 | 0.2825 |
| 0.0186 | 29.41 | 17000 | 0.4772 | 0.2821 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
lakshaywadhwa1993/ner_marathi_bert
|
lakshaywadhwa1993
| 2022-08-01T08:39:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-09T21:00:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: ner_marathi_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_marathi_bert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3606
- Overall Precision: 0.8939
- Overall Recall: 0.9030
- Overall F1: 0.8984
- Overall Accuracy: 0.9347
- Loc F1: 0.8823
- Org F1: 0.8555
- Per F1: 0.9435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Loc F1 | Org F1 | Per F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:------:|:------:|
| 0.2961 | 3.19 | 1000 | 0.3496 | 0.8720 | 0.8841 | 0.8780 | 0.9229 | 0.8599 | 0.8210 | 0.9343 |
| 0.0613 | 6.39 | 2000 | 0.3606 | 0.8939 | 0.9030 | 0.8984 | 0.9347 | 0.8823 | 0.8555 | 0.9435 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/kantegory
|
huggingtweets
| 2022-08-01T07:26:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-01T07:26:04Z |
---
language: en
thumbnail: http://www.huggingtweets.com/kantegory/1659338795219/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1122432883036172288/mYZ4acNy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">David Dobryakov</div>
<div style="text-align: center; font-size: 14px;">@kantegory</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from David Dobryakov.
| Data | David Dobryakov |
| --- | --- |
| Tweets downloaded | 3017 |
| Retweets | 90 |
| Short tweets | 256 |
| Tweets kept | 2671 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1g9yc7mp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kantegory's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2aeg6rk1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2aeg6rk1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kantegory')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-v2
|
AykeeSalazar
| 2022-08-01T05:42:48Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-08-01T04:42:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vc-bantai-vit-withoutAMBI-adunest-v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: Violation-Classification---Raw-10
metrics:
- name: Accuracy
type: accuracy
value: 0.7705338809034907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vc-bantai-vit-withoutAMBI-adunest-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8271
- Accuracy: 0.7705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.4 | 100 | 0.3811 | 0.8511 |
| No log | 0.81 | 200 | 0.3707 | 0.8609 |
| No log | 1.21 | 300 | 0.5708 | 0.7325 |
| No log | 1.61 | 400 | 0.3121 | 0.8778 |
| 0.3308 | 2.02 | 500 | 0.3358 | 0.8445 |
| 0.3308 | 2.42 | 600 | 0.2820 | 0.8768 |
| 0.3308 | 2.82 | 700 | 0.4825 | 0.7695 |
| 0.3308 | 3.23 | 800 | 0.3133 | 0.8640 |
| 0.3308 | 3.63 | 900 | 0.4509 | 0.8219 |
| 0.2028 | 4.03 | 1000 | 0.5426 | 0.7551 |
| 0.2028 | 4.44 | 1100 | 0.4886 | 0.8552 |
| 0.2028 | 4.84 | 1200 | 0.5649 | 0.7695 |
| 0.2028 | 5.24 | 1300 | 0.5925 | 0.7900 |
| 0.2028 | 5.65 | 1400 | 0.4203 | 0.8439 |
| 0.1471 | 6.05 | 1500 | 0.4275 | 0.8486 |
| 0.1471 | 6.45 | 1600 | 0.3683 | 0.8727 |
| 0.1471 | 6.85 | 1700 | 0.5709 | 0.8121 |
| 0.1471 | 7.26 | 1800 | 0.6209 | 0.7680 |
| 0.1471 | 7.66 | 1900 | 0.4971 | 0.8147 |
| 0.101 | 8.06 | 2000 | 0.8792 | 0.7567 |
| 0.101 | 8.47 | 2100 | 0.3288 | 0.8670 |
| 0.101 | 8.87 | 2200 | 0.3643 | 0.8342 |
| 0.101 | 9.27 | 2300 | 0.4883 | 0.8711 |
| 0.101 | 9.68 | 2400 | 0.2892 | 0.8943 |
| 0.0667 | 10.08 | 2500 | 0.5437 | 0.8398 |
| 0.0667 | 10.48 | 2600 | 0.5841 | 0.8450 |
| 0.0667 | 10.89 | 2700 | 0.8016 | 0.8219 |
| 0.0667 | 11.29 | 2800 | 0.6389 | 0.7772 |
| 0.0667 | 11.69 | 2900 | 0.3714 | 0.8753 |
| 0.0674 | 12.1 | 3000 | 0.9811 | 0.7130 |
| 0.0674 | 12.5 | 3100 | 0.6359 | 0.8101 |
| 0.0674 | 12.9 | 3200 | 0.5691 | 0.8285 |
| 0.0674 | 13.31 | 3300 | 0.6123 | 0.8316 |
| 0.0674 | 13.71 | 3400 | 0.3655 | 0.8978 |
| 0.0525 | 14.11 | 3500 | 0.4988 | 0.8583 |
| 0.0525 | 14.52 | 3600 | 0.6153 | 0.8450 |
| 0.0525 | 14.92 | 3700 | 0.4189 | 0.8881 |
| 0.0525 | 15.32 | 3800 | 0.9713 | 0.7967 |
| 0.0525 | 15.73 | 3900 | 1.1224 | 0.7967 |
| 0.0438 | 16.13 | 4000 | 0.5725 | 0.8578 |
| 0.0438 | 16.53 | 4100 | 0.4725 | 0.8532 |
| 0.0438 | 16.94 | 4200 | 0.4696 | 0.8640 |
| 0.0438 | 17.34 | 4300 | 0.4028 | 0.8789 |
| 0.0438 | 17.74 | 4400 | 0.9452 | 0.7746 |
| 0.0462 | 18.15 | 4500 | 0.4455 | 0.8783 |
| 0.0462 | 18.55 | 4600 | 0.6328 | 0.8311 |
| 0.0462 | 18.95 | 4700 | 0.6707 | 0.8296 |
| 0.0462 | 19.35 | 4800 | 0.7771 | 0.8429 |
| 0.0462 | 19.76 | 4900 | 1.2832 | 0.7408 |
| 0.0381 | 20.16 | 5000 | 0.5415 | 0.8737 |
| 0.0381 | 20.56 | 5100 | 0.8932 | 0.7977 |
| 0.0381 | 20.97 | 5200 | 0.5182 | 0.8691 |
| 0.0381 | 21.37 | 5300 | 0.5967 | 0.8794 |
| 0.0381 | 21.77 | 5400 | 0.8271 | 0.7705 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|