| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| NoCrypt/momocha-mix | NoCrypt | 2022-11-10T06:49:03Z | 0 | 19 | null | ["stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2022-11-10T06:39:29Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Momocha mix models
Scraped from [chenyfan's SharePoint](https://cyfan-my.sharepoint.com/:f:/g/personal/chenyfan_cyfan_onmicrosoft_com/EilOWB40m3ZJn6ahczIUIs4B6v0XvizO5YorOhG_5eYSUw?e=ZyP7qE)
Example output:

| betelgeux/bert-base-uncased-issues-128 | betelgeux | 2022-11-10T05:21:31Z | 101 | 0 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-11-09T07:16:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
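These settings map roughly onto the `transformers` `TrainingArguments` API; the sketch below is a hedged reconstruction, not the author's training script (the `output_dir`, the evaluation strategy, and every omitted default — including the Adam betas and epsilon listed above — are assumptions).
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-issues-128",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
    evaluation_strategy="epoch",  # assumed: the table below reports one validation loss per epoch
)
```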
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3932 | 1.0 | 1409 | 2.0750 |
| 2.1659 | 2.0 | 2818 | 1.9781 |
| 2.0364 | 3.0 | 4227 | 2.1215 |
| 1.9399 | 4.0 | 5636 | 2.1018 |
| 1.8857 | 5.0 | 7045 | 1.9919 |
| 1.813 | 6.0 | 8454 | 2.2653 |
| 1.7505 | 7.0 | 9863 | 2.0857 |
| 1.7196 | 8.0 | 11272 | 1.9211 |
| 1.672 | 9.0 | 12681 | 1.9853 |
| 1.6379 | 10.0 | 14090 | 2.0391 |
| 1.6037 | 11.0 | 15499 | 1.9305 |
| 1.5699 | 12.0 | 16908 | 2.0291 |
| 1.5363 | 13.0 | 18317 | 2.0492 |
| 1.5155 | 14.0 | 19726 | 1.8807 |
| 1.4999 | 15.0 | 21135 | 1.8604 |
| 1.4784 | 16.0 | 22544 | 2.0348 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| Terence3927/ppo-LunarLander-v2-optuna | Terence3927 | 2022-11-10T05:17:58Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-11-10T05:17:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.24 +/- 24.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed, not confirmed against the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent.
# The filename below is an assumption; check the files in the repository.
checkpoint = load_from_hub(repo_id="Terence3927/ppo-LunarLander-v2-optuna", filename="ppo-LunarLander-v2-optuna.zip")
model = PPO.load(checkpoint)
```
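With the agent loaded, a short evaluation rollout might look like this (a hedged sketch assuming the classic `gym` API that stable-baselines3 1.x targets; `model` is the agent loaded above):
```python
import gym

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Query the policy deterministically for evaluation
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward:.2f}")
```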
| xu1998hz/sescore_german_mt | xu1998hz | 2022-11-10T03:59:25Z | 0 | 1 | null | ["region:us"] | null | 2022-11-05T01:44:41Z |
SEScore German checkpoint for Machine Translation
| irfan-noordin/segformer-b0-finetuned-segments-sidewalk-oct-22 | irfan-noordin | 2022-11-10T02:23:44Z | 157 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "segformer", "vision", "image-segmentation", "generated_from_trainer", "license:other", "endpoints_compatible", "region:us"] | image-segmentation | 2022-11-09T06:58:03Z |
---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-oct-22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-oct-22
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9249
- Mean Iou: 0.1675
- Mean Accuracy: 0.2109
- Overall Accuracy: 0.7776
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.8631
- Accuracy Flat-sidewalk: 0.9423
- Accuracy Flat-crosswalk: 0.0
- Accuracy Flat-cyclinglane: 0.4704
- Accuracy Flat-parkingdriveway: 0.1421
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0061
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.8937
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.0
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.9143
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0055
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9291
- Accuracy Nature-terrain: 0.8710
- Accuracy Sky: 0.9207
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.6127
- Iou Flat-sidewalk: 0.8192
- Iou Flat-crosswalk: 0.0
- Iou Flat-cyclinglane: 0.4256
- Iou Flat-parkingdriveway: 0.1262
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0061
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.6655
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.0
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.5666
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0054
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.7875
- Iou Nature-terrain: 0.6912
- Iou Sky: 0.8218
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 2.832 | 0.05 | 20 | 3.1768 | 0.0700 | 0.1095 | 0.5718 | nan | 0.1365 | 0.9472 | 0.0019 | 0.0006 | 0.0004 | 0.0 | 0.0205 | 0.0 | 0.0 | 0.2074 | 0.0 | 0.0 | 0.0 | 0.0017 | 0.0001 | 0.0 | 0.0 | 0.7360 | 0.0 | 0.0235 | 0.0050 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9559 | 0.0429 | 0.5329 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1260 | 0.5906 | 0.0016 | 0.0006 | 0.0004 | 0.0 | 0.0175 | 0.0 | 0.0 | 0.2006 | 0.0 | 0.0 | 0.0 | 0.0003 | 0.0001 | 0.0 | 0.0 | 0.3729 | 0.0 | 0.0209 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5778 | 0.0408 | 0.4932 | 0.0009 | 0.0 | 0.0 | 0.0 |
| 2.3224 | 0.1 | 40 | 2.4686 | 0.0885 | 0.1321 | 0.6347 | nan | 0.5225 | 0.9260 | 0.0005 | 0.0001 | 0.0006 | 0.0 | 0.0113 | 0.0 | 0.0 | 0.3738 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8191 | 0.0 | 0.0263 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9649 | 0.0701 | 0.6434 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4240 | 0.6602 | 0.0005 | 0.0001 | 0.0006 | 0.0 | 0.0109 | 0.0 | 0.0 | 0.3292 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3962 | 0.0 | 0.0260 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6019 | 0.0617 | 0.5862 | 0.0001 | 0.0 | 0.0 | 0.0 |
| 2.1961 | 0.15 | 60 | 1.9886 | 0.0988 | 0.1431 | 0.6500 | nan | 0.5168 | 0.9319 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.5761 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8325 | 0.0 | 0.0132 | 0.0003 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9612 | 0.1260 | 0.7625 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.3929 | 0.6721 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.4609 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4375 | 0.0 | 0.0131 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6342 | 0.1108 | 0.6353 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2964 | 0.2 | 80 | 2.0597 | 0.1066 | 0.1503 | 0.6682 | nan | 0.6577 | 0.9207 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0044 | 0.0 | 0.0 | 0.5257 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8466 | 0.0 | 0.0094 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9526 | 0.2022 | 0.8392 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4276 | 0.7093 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0044 | 0.0 | 0.0 | 0.4438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4488 | 0.0 | 0.0093 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6560 | 0.1833 | 0.7408 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9751 | 0.25 | 100 | 1.7493 | 0.1186 | 0.1645 | 0.6944 | nan | 0.7604 | 0.9146 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0016 | 0.0 | 0.0 | 0.7381 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8273 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9636 | 0.3289 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4904 | 0.7490 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0016 | 0.0 | 0.0 | 0.5465 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4913 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6542 | 0.2761 | 0.7004 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7626 | 0.3 | 120 | 1.5608 | 0.1295 | 0.1752 | 0.7118 | nan | 0.8168 | 0.9102 | 0.0 | 0.0002 | 0.0025 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8094 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8362 | 0.0 | 0.0030 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9492 | 0.5677 | 0.8861 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4958 | 0.7592 | 0.0 | 0.0002 | 0.0025 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.5680 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5095 | 0.0 | 0.0030 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7082 | 0.4878 | 0.7392 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.32 | 0.35 | 140 | 1.5048 | 0.1323 | 0.1797 | 0.7181 | nan | 0.7883 | 0.9260 | 0.0 | 0.0000 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8711 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8590 | 0.0 | 0.0022 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9128 | 0.7088 | 0.8576 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5141 | 0.7598 | 0.0 | 0.0000 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5016 | 0.0 | 0.0022 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7458 | 0.5602 | 0.7499 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6464 | 0.4 | 160 | 1.3886 | 0.1342 | 0.1783 | 0.7217 | nan | 0.7859 | 0.9390 | 0.0 | 0.0 | 0.0059 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8508 | 0.0 | 0.0010 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9368 | 0.7223 | 0.9025 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5173 | 0.7561 | 0.0 | 0.0 | 0.0058 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5059 | 0.0 | 0.0010 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7366 | 0.5802 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4757 | 0.45 | 180 | 1.3649 | 0.1367 | 0.1840 | 0.7255 | nan | 0.8587 | 0.9185 | 0.0 | 0.0001 | 0.0039 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8588 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8337 | 0.0 | 0.0014 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9036 | 0.7809 | 0.9138 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5077 | 0.7693 | 0.0 | 0.0001 | 0.0039 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5980 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5264 | 0.0 | 0.0014 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7521 | 0.6078 | 0.7438 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.0018 | 0.5 | 200 | 1.3118 | 0.1353 | 0.1839 | 0.7242 | nan | 0.7797 | 0.9457 | 0.0 | 0.0029 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8509 | 0.0 | 0.0018 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8704 | 0.8688 | 0.9069 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5321 | 0.7602 | 0.0 | 0.0029 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6060 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5276 | 0.0 | 0.0018 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7133 | 0.5551 | 0.7593 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4636 | 0.55 | 220 | 1.2729 | 0.1330 | 0.1797 | 0.7249 | nan | 0.8619 | 0.9203 | 0.0 | 0.0015 | 0.0067 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8903 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8514 | 0.0 | 0.0031 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9447 | 0.5448 | 0.9040 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5249 | 0.7844 | 0.0 | 0.0015 | 0.0066 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5735 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5336 | 0.0 | 0.0031 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7136 | 0.4869 | 0.7613 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1856 | 0.6 | 240 | 1.2551 | 0.1382 | 0.1828 | 0.7274 | nan | 0.7497 | 0.9518 | 0.0 | 0.0005 | 0.0048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8893 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8153 | 0.0 | 0.0048 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9475 | 0.7597 | 0.9107 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5097 | 0.7477 | 0.0 | 0.0005 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6172 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5527 | 0.0 | 0.0048 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7293 | 0.6250 | 0.7703 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4577 | 0.65 | 260 | 1.1862 | 0.1387 | 0.1848 | 0.7304 | nan | 0.8842 | 0.9065 | 0.0 | 0.0001 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8632 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9442 | 0.7313 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5121 | 0.7833 | 0.0 | 0.0001 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6297 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5381 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7437 | 0.6199 | 0.7486 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0748 | 0.7 | 280 | 1.2000 | 0.1391 | 0.1846 | 0.7301 | nan | 0.7249 | 0.9690 | 0.0 | 0.0005 | 0.0064 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8656 | 0.0 | 0.0014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8917 | 0.8362 | 0.9065 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5306 | 0.7403 | 0.0 | 0.0005 | 0.0063 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6223 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5491 | 0.0 | 0.0014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7566 | 0.6061 | 0.7761 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.642 | 0.75 | 300 | 1.1452 | 0.1432 | 0.1880 | 0.7409 | nan | 0.8682 | 0.9389 | 0.0 | 0.0030 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8605 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8759 | 0.0 | 0.0020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9092 | 0.8515 | 0.8892 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5333 | 0.7905 | 0.0 | 0.0030 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5418 | 0.0 | 0.0020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7655 | 0.6551 | 0.7893 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2166 | 0.8 | 320 | 1.1450 | 0.1388 | 0.1849 | 0.7391 | nan | 0.8516 | 0.9460 | 0.0 | 0.0043 | 0.0060 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8803 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9283 | 0.6849 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5584 | 0.7932 | 0.0 | 0.0043 | 0.0060 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5844 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5259 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7548 | 0.5985 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.1346 | 0.85 | 340 | 1.1215 | 0.1428 | 0.1887 | 0.7411 | nan | 0.7956 | 0.9551 | 0.0 | 0.0145 | 0.0098 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8646 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8884 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9131 | 0.8828 | 0.9024 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5611 | 0.7721 | 0.0 | 0.0145 | 0.0097 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6313 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5405 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7563 | 0.6337 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8351 | 0.9 | 360 | 1.1012 | 0.1433 | 0.1896 | 0.7449 | nan | 0.8723 | 0.9432 | 0.0 | 0.0025 | 0.0114 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8822 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8662 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9213 | 0.8361 | 0.9201 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5472 | 0.7989 | 0.0 | 0.0025 | 0.0113 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6277 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5416 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7666 | 0.6674 | 0.7664 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.152 | 0.95 | 380 | 1.1045 | 0.1452 | 0.1891 | 0.7453 | nan | 0.8827 | 0.9332 | 0.0 | 0.0457 | 0.0124 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8848 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9399 | 0.7910 | 0.9107 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5462 | 0.7966 | 0.0 | 0.0457 | 0.0123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5395 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7636 | 0.6627 | 0.7763 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2062 | 1.0 | 400 | 1.0607 | 0.1469 | 0.1897 | 0.7482 | nan | 0.8192 | 0.9644 | 0.0 | 0.0944 | 0.0198 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8821 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9193 | 0.8054 | 0.9137 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5772 | 0.7742 | 0.0 | 0.0941 | 0.0195 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6414 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5360 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7740 | 0.6591 | 0.7710 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0116 | 1.05 | 420 | 1.0503 | 0.1493 | 0.1950 | 0.7554 | nan | 0.8686 | 0.9478 | 0.0 | 0.2033 | 0.0295 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9166 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8409 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9414 | 0.7667 | 0.9196 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5809 | 0.8022 | 0.0 | 0.1995 | 0.0287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5916 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5517 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7628 | 0.6441 | 0.7652 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.009 | 1.1 | 440 | 1.0723 | 0.1529 | 0.1958 | 0.7553 | nan | 0.7797 | 0.9670 | 0.0 | 0.2214 | 0.0547 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8927 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9274 | 0.8016 | 0.9176 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5898 | 0.7717 | 0.0 | 0.2157 | 0.0526 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6389 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5499 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7760 | 0.6697 | 0.7818 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1496 | 1.15 | 460 | 1.0417 | 0.1571 | 0.2017 | 0.7607 | nan | 0.7736 | 0.9645 | 0.0 | 0.3606 | 0.0669 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8775 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8801 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.8906 | 0.9326 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6102 | 0.7737 | 0.0 | 0.3374 | 0.0634 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5538 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7682 | 0.6437 | 0.7772 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4669 | 1.2 | 480 | 1.0161 | 0.1566 | 0.2024 | 0.7637 | nan | 0.8236 | 0.9531 | 0.0 | 0.3507 | 0.0584 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.9165 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8675 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9263 | 0.8597 | 0.9222 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6005 | 0.7983 | 0.0 | 0.3296 | 0.0556 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6153 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5498 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7752 | 0.6654 | 0.7770 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.075 | 1.25 | 500 | 1.0124 | 0.1556 | 0.2000 | 0.7634 | nan | 0.8521 | 0.9499 | 0.0 | 0.3154 | 0.0410 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8618 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9442 | 0.8133 | 0.9290 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5910 | 0.8068 | 0.0 | 0.2992 | 0.0394 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6338 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5507 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7689 | 0.6697 | 0.7737 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.888 | 1.3 | 520 | 0.9797 | 0.1597 | 0.2028 | 0.7677 | nan | 0.8590 | 0.9472 | 0.0 | 0.3534 | 0.0469 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8900 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8807 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9379 | 0.8578 | 0.9187 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5908 | 0.8056 | 0.0 | 0.3311 | 0.0448 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6598 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5676 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7712 | 0.6912 | 0.8088 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8099 | 1.35 | 540 | 0.9760 | 0.1589 | 0.2026 | 0.7678 | nan | 0.8526 | 0.9534 | 0.0 | 0.3370 | 0.0313 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9235 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8862 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9252 | 0.8551 | 0.9206 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5954 | 0.8014 | 0.0 | 0.3188 | 0.0303 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5706 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7830 | 0.6934 | 0.8122 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1998 | 1.4 | 560 | 0.9815 | 0.1578 | 0.2030 | 0.7631 | nan | 0.8956 | 0.9250 | 0.0 | 0.3267 | 0.0461 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.8929 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8956 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9206 | 0.8669 | 0.9275 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5656 | 0.8136 | 0.0 | 0.3102 | 0.0440 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.6574 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5524 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7894 | 0.6940 | 0.7818 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5591 | 1.45 | 580 | 0.9654 | 0.1618 | 0.2043 | 0.7698 | nan | 0.8198 | 0.9655 | 0.0 | 0.3715 | 0.0848 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.8935 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8965 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9146 | 0.8730 | 0.9198 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6182 | 0.7898 | 0.0 | 0.3467 | 0.0792 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.6590 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5647 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7871 | 0.6835 | 0.8101 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.861 | 1.5 | 600 | 0.9622 | 0.1607 | 0.2045 | 0.7689 | nan | 0.8163 | 0.9648 | 0.0 | 0.3780 | 0.0907 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.9187 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8714 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9229 | 0.8485 | 0.9361 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6180 | 0.7903 | 0.0 | 0.3541 | 0.0844 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.6307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5609 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7854 | 0.6904 | 0.7884 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8335 | 1.55 | 620 | 0.9569 | 0.1598 | 0.2050 | 0.7686 | nan | 0.8421 | 0.9561 | 0.0 | 0.3493 | 0.0928 | 0.0 | 0.0012 | 0.0 | 0.0 | 0.9261 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8753 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9172 | 0.8688 | 0.9335 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6069 | 0.8031 | 0.0 | 0.3306 | 0.0860 | 0.0 | 0.0012 | 0.0 | 0.0 | 0.6123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5618 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7851 | 0.6911 | 0.7950 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9988 | 1.6 | 640 | 0.9337 | 0.1611 | 0.2050 | 0.7711 | nan | 0.8595 | 0.9538 | 0.0 | 0.3512 | 0.0928 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.8962 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8854 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9281 | 0.8594 | 0.9367 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6062 | 0.8105 | 0.0 | 0.3310 | 0.0868 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.6565 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5596 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7819 | 0.6958 | 0.7880 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.966 | 1.65 | 660 | 0.9322 | 0.1612 | 0.2051 | 0.7707 | nan | 0.8706 | 0.9494 | 0.0 | 0.3470 | 0.0997 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.8905 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9347 | 0.8652 | 0.9364 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5953 | 0.8136 | 0.0 | 0.3281 | 0.0922 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.6654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5696 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7756 | 0.6890 | 0.7885 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2154 | 1.7 | 680 | 0.9373 | 0.1611 | 0.2048 | 0.7710 | nan | 0.8448 | 0.9577 | 0.0 | 0.3717 | 0.1010 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.9173 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8613 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9411 | 0.8371 | 0.9246 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6096 | 0.8056 | 0.0 | 0.3487 | 0.0930 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.6272 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5696 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7762 | 0.6911 | 0.7931 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7979 | 1.75 | 700 | 0.9429 | 0.1622 | 0.2067 | 0.7717 | nan | 0.8496 | 0.9548 | 0.0 | 0.3821 | 0.1182 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8803 | 0.0 | 0.0043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9202 | 0.8812 | 0.9204 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6104 | 0.8088 | 0.0 | 0.3583 | 0.1074 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.6410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5675 | 0.0 | 0.0043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7784 | 0.6767 | 0.7994 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8366 | 1.8 | 720 | 0.9379 | 0.1645 | 0.2075 | 0.7745 | nan | 0.8359 | 0.9580 | 0.0 | 0.4130 | 0.1275 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.8998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8704 | 0.0 | 0.0088 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9450 | 0.8617 | 0.9251 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6227 | 0.8035 | 0.0 | 0.3850 | 0.1147 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5777 | 0.0 | 0.0088 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.7682 | 0.6867 | 0.8055 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0448 | 1.85 | 740 | 0.9419 | 0.1659 | 0.2087 | 0.7769 | nan | 0.8483 | 0.9532 | 0.0 | 0.4442 | 0.1387 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.8986 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8865 | 0.0 | 0.0042 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9458 | 0.8442 | 0.9215 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6240 | 0.8122 | 0.0 | 0.4077 | 0.1237 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.6529 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5700 | 0.0 | 0.0041 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7767 | 0.6938 | 0.8070 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9737 | 1.9 | 760 | 0.9193 | 0.1664 | 0.2082 | 0.7772 | nan | 0.8420 | 0.9586 | 0.0 | 0.4353 | 0.1193 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.9082 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8955 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9385 | 0.8464 | 0.9190 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6232 | 0.8053 | 0.0 | 0.4022 | 0.1088 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.6549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5766 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7843 | 0.7077 | 0.8180 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0716 | 1.95 | 780 | 0.9170 | 0.1672 | 0.2098 | 0.7785 | nan | 0.8434 | 0.9539 | 0.0 | 0.4671 | 0.1283 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.9012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8984 | 0.0 | 0.0058 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9398 | 0.8661 | 0.9157 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6242 | 0.8106 | 0.0 | 0.4232 | 0.1156 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.6631 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5777 | 0.0 | 0.0057 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7811 | 0.6920 | 0.8223 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4144 | 2.0 | 800 | 0.9249 | 0.1675 | 0.2109 | 0.7776 | nan | 0.8631 | 0.9423 | 0.0 | 0.4704 | 0.1421 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.8937 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9143 | 0.0 | 0.0055 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9291 | 0.8710 | 0.9207 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6127 | 0.8192 | 0.0 | 0.4256 | 0.1262 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.6655 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5666 | 0.0 | 0.0054 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7875 | 0.6912 | 0.8218 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
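A hedged inference sketch for this checkpoint (the image path is a placeholder, and the image processor is assumed to be present in the repository; if it is not, load it from the base `nvidia/mit-b0` checkpoint instead):
```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

repo = "irfan-noordin/segformer-b0-finetuned-segments-sidewalk-oct-22"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("sidewalk.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)

# Upsample to the input resolution and take the per-pixel argmax to get a class map
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
class_map = upsampled.argmax(dim=1)[0]
```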
| burakyldrm/wav2vec2-burak-new-300-v2-6 | burakyldrm | 2022-11-10T01:45:10Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-09T19:25:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-burak-new-300-v2-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-6
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3074
- Wer: 0.2340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 151
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.3136 | 9.61 | 500 | 3.1262 | 1.0 |
| 1.8247 | 19.23 | 1000 | 0.4049 | 0.5065 |
| 0.5387 | 28.83 | 1500 | 0.2828 | 0.3462 |
| 0.3713 | 38.45 | 2000 | 0.2761 | 0.3125 |
| 0.293 | 48.08 | 2500 | 0.2872 | 0.3001 |
| 0.2436 | 57.68 | 3000 | 0.2912 | 0.2904 |
| 0.2116 | 67.3 | 3500 | 0.2910 | 0.2725 |
| 0.1859 | 76.91 | 4000 | 0.2937 | 0.2533 |
| 0.1731 | 86.53 | 4500 | 0.2985 | 0.2485 |
| 0.1569 | 96.15 | 5000 | 0.3022 | 0.2409 |
| 0.1471 | 105.76 | 5500 | 0.3070 | 0.2374 |
| 0.1385 | 115.38 | 6000 | 0.2954 | 0.2429 |
| 0.1289 | 124.99 | 6500 | 0.3016 | 0.2361 |
| 0.1268 | 134.61 | 7000 | 0.3000 | 0.2368 |
| 0.12 | 144.23 | 7500 | 0.3074 | 0.2340 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
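A hedged usage sketch with the standard `transformers` speech-recognition pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="burakyldrm/wav2vec2-burak-new-300-v2-6")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```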
| sanchit-gandhi/whisper-medium-es-5k | sanchit-gandhi | 2022-11-10T01:33:57Z | 7 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "es", "dataset:facebook/multilingual_librispeech", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-09T19:30:55Z |
---
language:
- es
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- facebook/multilingual_librispeech
metrics:
- wer
model-index:
- name: Whisper Small Es - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
args: 'config: es, split: test'
metrics:
- name: Wer
type: wer
value: 60.16226172047142
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Es - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Multilingual LibriSpeech dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2668
- Wer: 60.1623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 2.2112 | 0.2 | 500 | 1.7394 | 61.1126 |
| 1.4913 | 0.4 | 1000 | 1.3758 | 62.8143 |
| 1.6651 | 0.6 | 1500 | 1.3100 | 61.3261 |
| 1.7031 | 0.8 | 2000 | 1.2752 | 60.5261 |
| 1.4289 | 1.0 | 2500 | 1.2668 | 60.1623 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.0
- Datasets 2.6.2.dev0
- Tokenizers 0.12.1
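The Wer values above are word error rates reported in percent. A hedged illustration of how such a number is computed with the `evaluate` library (the transcripts are toy examples):
```python
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["la casa es azul"]   # model transcription (illustrative)
references = ["la casa era azul"]   # ground-truth transcript (illustrative)
print(100 * wer_metric.compute(predictions=predictions, references=references))
# -> 25.0: one substitution out of four reference words
```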
| hcho22/opus-mt-ko-en-finetuned-kr-to-en | hcho22 | 2022-11-10T00:23:13Z | 60 | 0 | transformers | ["transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-11-08T18:23:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hcho22/opus-mt-ko-en-finetuned-kr-to-en
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hcho22/opus-mt-ko-en-finetuned-kr-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2330
- Validation Loss: 1.2844
- Train Bleu: 30.7578
- Train Gen Len: 13.9104
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 1.2330 | 1.2844 | 30.7578 | 13.9104 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
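Since the repository ships TensorFlow weights, a hedged usage sketch with the TF seq2seq classes (the input sentence is illustrative):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "hcho22/opus-mt-ko-en-finetuned-kr-to-en"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```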
| ntsema/wav2vec2-xlsr-53-espeak-cv-ft-mhr-ntsema-colab | ntsema | 2022-11-10T00:12:21Z | 116 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-07T17:02:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-mhr-ntsema-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.8127090301003345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-mhr-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7728
- Wer: 0.8127
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8463 | 5.79 | 400 | 1.0428 | 0.9331 |
| 1.4576 | 11.59 | 800 | 0.6796 | 0.8495 |
| 0.8054 | 17.39 | 1200 | 0.7131 | 0.8227 |
| 0.4946 | 23.19 | 1600 | 0.7202 | 0.8194 |
| 0.3157 | 28.98 | 2000 | 0.7728 | 0.8127 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.14.0.dev20221107+cu116
- Datasets 2.6.1
- Tokenizers 0.13.2
| alexionby/clip-roberta-finetuned | alexionby | 2022-11-09T23:36:21Z | 106 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vision-text-dual-encoder", "feature-extraction", "generated_from_trainer", "endpoints_compatible", "region:us"] | feature-extraction | 2022-11-09T21:35:52Z |
---
tags:
- generated_from_trainer
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of [./clip-roberta](https://huggingface.co/./clip-roberta) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.10.2
- Datasets 2.6.1
- Tokenizers 0.12.1
| giulio98/codegen-350M-multi-xlcost-v2 | giulio98 | 2022-11-09T23:22:53Z | 102 | 0 | transformers | ["transformers", "pytorch", "codegen", "text-generation", "code", "gpt2", "generation", "dataset:giulio98/xlcost-single-prompt", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2022-11-09T13:56:56Z |
---
language: code
tags:
- code
- gpt2
- generation
datasets:
- giulio98/xlcost-single-prompt
widget:
- text: "'''\nfunction to add two numbers\n'''\n###\n"
example_title: "add two numbers"
model-index:
- name: codegen-350M-multi-xlcost
results:
- task:
name: Code Generation
type: code-generation
dataset:
name: "XLCost"
type: code_eval_outputs
metrics:
- name: pass@1
type: code_eval_outputs
value: 3.325
- name: pass@10
type: code_eval_outputs
value: 15
- name: codebleu
type: codebleu
value: 20.18191
---
# CodeGen-350M-multi-xlcost-v2
CodeGen-350M-multi-xlcost is a CodeGen model fine-tuned on the Python split of the XLCost dataset using DeepSpeed.
## Usage
You can load the CodeGen-350M-multi-xlcost-v2 model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("giulio98/codegen-350M-multi-xlcost-v2")
model = AutoModelForCausalLM.from_pretrained("giulio98/codegen-350M-multi-xlcost-v2")
text = tokenizer.eos_token + "\'\'\'\n" + "function to add two numbers" + "\n\'\'\'\n" + "###\n"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
Output:
```Python
'''
function to add two numbers
'''
###
def add(a, b):
return a + b
```
## Training
The model was fine-tuned on [XLCost-single-prompt](https://huggingface.co/datasets/giulio98/xlcost-single-prompt), an improved version of the original [xlcost-text-to-code](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) dataset. The hyperparameters are listed below.
| Hyperparameter | value |
|---------------------------|--------|
|Per device train batch size| 16 |
|Context size| 1024 |
|Training steps| 259|
|Gradient accumulation| 2|
|Gradient checkpointing| True|
|Learning rate|1.8e-05 |
|Weight decay | 0.1 |
|Warmup steps| 35 |
|Schedule| linear |
|zero stage| 2 |
The DeepSpeed configuration is shown below:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.000018,
"betas": [
0.9,
0.999
],
"eps": 1e-8,
"weight_decay": 0.1
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.000018,
"warmup_num_steps": 35
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": false
},
"allgather_partitions": true,
"allgather_bucket_size": 200000000,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 200000000,
"contiguous_gradients": true
},
"gradient_accumulation_steps": 2,
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 16,
"gradient_clipping": 1,
"wall_clock_breakdown": false
}
```
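A configuration like this is typically handed to the Hugging Face `Trainer` through `TrainingArguments`; the sketch below is a hedged illustration (the `ds_config.json` filename and `output_dir` are assumptions, not taken from the author's training script):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codegen-350M-multi-xlcost-v2",  # assumed
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    fp16=True,
    deepspeed="ds_config.json",  # path to the JSON config shown above (assumed filename)
)
```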
Training ran on a single V100 (16 GB) GPU for 28 min 50 s.
## Performance
We evaluated the model on the first 400 samples of XLCost's [XLCost-single-prompt test split](https://huggingface.co/datasets/giulio98/xlcost-single-prompt/viewer/Python/test), comparing the outputs of the generated code against the expected outputs using the pass@k metric.
| Metric   | codegen-350M-multi-xlcost-v2 | codegen-350M-multi-xlcost | codegen-350M-mono (zero-shot) | codegen-350M-mono (one-shot) | codegen-350M-mono (few-shot) |
|----------|------------------------------|---------------------------|-------------------------------|------------------------------|------------------------------|
| pass@1   | 3.325%                       | 3.70%                     | 0.4%                          | 0.35%                        | 0.48%                        |
| pass@10  | 15%                          | 14.5%                     | 3.5%                          | 3%                           | 3.75%                        |
| CodeBLEU | 20.18%                       | None                      | 15.15%                        | 19.42%                       | 20.27%                       |
The [pass@k metric](https://huggingface.co/metrics/code_eval) gives the probability that at least one of the k generations passes the tests.
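A hedged illustration of how pass@k can be computed with the `code_eval` metric from the `evaluate` library (the problem and the two candidate solutions are toy examples):
```python
import os
import evaluate

# code_eval executes untrusted model-generated code, so it must be enabled explicitly
os.environ["HF_ALLOW_CODE_EVAL"] = "1"
code_eval = evaluate.load("code_eval")

test_cases = ["assert add(1, 2) == 3"]
candidates = [["def add(a, b):\n    return a + b", "def add(a, b):\n    return a - b"]]
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)  # {'pass@1': 0.5, 'pass@2': 1.0}
```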
## Citations
```
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
| malay-patel/bert-ww-finetuned-squad | malay-patel | 2022-11-09T23:20:25Z | 60 | 0 | transformers | ["transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-11-09T07:19:23Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: malay-patel/bert-ww-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# malay-patel/bert-ww-finetuned-squad
This model is a fine-tuned version of [bert-large-cased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1766
- Train End Logits Accuracy: 0.9455
- Train Start Logits Accuracy: 0.9312
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16638, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:-----:|
| 0.5635 | 0.8374 | 0.7992 | 0 |
| 0.3369 | 0.8987 | 0.8695 | 1 |
| 0.1766 | 0.9455 | 0.9312 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
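A hedged usage sketch; the checkpoint ships TensorFlow weights, hence `framework="tf"`, and the question/context pair is illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="malay-patel/bert-ww-finetuned-squad", framework="tf")
result = qa(
    question="What was the model fine-tuned from?",
    context="The model is a fine-tuned version of a whole-word-masking BERT checkpoint trained on SQuAD.",
)
print(result["answer"])
```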
| Omerdor/dry_samples | Omerdor | 2022-11-09T23:16:52Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "en", "dataset:imagefolder", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us"] | null | 2022-11-07T14:14:29Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# dry_samples
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch of assumed usage for a DDPM pipeline checkpoint (the author left this as a TODO)
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("Omerdor/dry_samples")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Omerdor/dry_samples/tensorboard?#scalars)
| gngpostalsrvc/BERiT_2000_enriched | gngpostalsrvc | 2022-11-09T22:33:52Z | 104 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-11-09T22:02:09Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_enriched
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_enriched
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.786 | 0.19 | 500 | 6.6797 |
| 6.6441 | 0.39 | 1000 | 6.6574 |
| 6.6376 | 0.58 | 1500 | 6.6240 |
| 6.5951 | 0.77 | 2000 | 6.6291 |
| 6.6123 | 0.97 | 2500 | 6.6355 |
| 6.6028 | 1.16 | 3000 | 6.6084 |
| 6.5974 | 1.36 | 3500 | 6.5984 |
| 6.6104 | 1.55 | 4000 | 6.5775 |
| 6.6113 | 1.74 | 4500 | 6.6062 |
| 6.5895 | 1.94 | 5000 | 6.5931 |
| 6.6106 | 2.13 | 5500 | 6.6276 |
| 6.635 | 2.32 | 6000 | 6.5973 |
| 6.5694 | 2.52 | 6500 | 6.6021 |
| 6.612 | 2.71 | 7000 | 6.5882 |
| 6.5984 | 2.9 | 7500 | 6.6052 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
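A hedged masked-language-modelling sketch for this checkpoint (the example sentence is illustrative only and says nothing about the language the model was actually trained on):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

repo = "gngpostalsrvc/BERiT_2000_enriched"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMaskedLM.from_pretrained(repo)

text = f"The quick brown {tokenizer.mask_token} jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Top-5 predictions for the masked position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = torch.topk(logits[0, mask_pos], k=5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```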
| model-attribution-challenge/bert-base-cased | model-attribution-challenge | 2022-11-09T22:24:46Z | 108 | 0 | transformers | ["transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-11-09T20:13:56Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (cased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between
english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] Hello I'm a fashion model. [SEP]",
'score': 0.09019174426794052,
'token': 4633,
'token_str': 'fashion'},
{'sequence': "[CLS] Hello I'm a new model. [SEP]",
'score': 0.06349995732307434,
'token': 1207,
'token_str': 'new'},
{'sequence': "[CLS] Hello I'm a male model. [SEP]",
'score': 0.06228214129805565,
'token': 2581,
'token_str': 'male'},
{'sequence': "[CLS] Hello I'm a professional model. [SEP]",
'score': 0.0441727414727211,
'token': 1848,
'token_str': 'professional'},
{'sequence': "[CLS] Hello I'm a super model. [SEP]",
'score': 0.03326151892542839,
'token': 7688,
'token_str': 'super'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertModel.from_pretrained("bert-base-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertModel.from_pretrained("bert-base-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-cased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] The man worked as a lawyer. [SEP]',
'score': 0.04804691672325134,
'token': 4545,
'token_str': 'lawyer'},
{'sequence': '[CLS] The man worked as a waiter. [SEP]',
'score': 0.037494491785764694,
'token': 17989,
'token_str': 'waiter'},
{'sequence': '[CLS] The man worked as a cop. [SEP]',
'score': 0.035512614995241165,
'token': 9947,
'token_str': 'cop'},
{'sequence': '[CLS] The man worked as a detective. [SEP]',
'score': 0.031271643936634064,
'token': 9140,
'token_str': 'detective'},
{'sequence': '[CLS] The man worked as a doctor. [SEP]',
'score': 0.027423162013292313,
'token': 3995,
'token_str': 'doctor'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] The woman worked as a nurse. [SEP]',
'score': 0.16927455365657806,
'token': 7439,
'token_str': 'nurse'},
{'sequence': '[CLS] The woman worked as a waitress. [SEP]',
'score': 0.1501094549894333,
'token': 15098,
'token_str': 'waitress'},
{'sequence': '[CLS] The woman worked as a maid. [SEP]',
'score': 0.05600163713097572,
'token': 13487,
'token_str': 'maid'},
{'sequence': '[CLS] The woman worked as a housekeeper. [SEP]',
'score': 0.04838843643665314,
'token': 26458,
'token_str': 'housekeeper'},
{'sequence': '[CLS] The woman worked as a cook. [SEP]',
'score': 0.029980547726154327,
'token': 9834,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the short sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
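For illustration, here is a minimal sketch of that 80% / 10% / 10% scheme in Python. It is not the original pretraining code; the example sentence is arbitrary and only the tokenizer is taken from this model.
```python
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
tokens = tokenizer.tokenize("The quick brown fox jumps over the lazy dog.")

masked = []
for token in tokens:
    if random.random() < 0.15:                # 15% of tokens are selected for masking
        r = random.random()
        if r < 0.8:                           # 80% of those become [MASK]
            masked.append('[MASK]')
        elif r < 0.9:                         # 10% become a random vocabulary token
            masked.append(random.choice(list(tokenizer.vocab.keys())))
        else:                                 # 10% are left unchanged
            masked.append(token)
    else:
        masked.append(token)
print(masked)
```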
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-cased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
huggingtweets/wyld
|
huggingtweets
| 2022-11-09T22:18:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-09T22:02:59Z |
---
language: en
thumbnail: http://www.huggingtweets.com/wyld/1668032276555/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547347036927696896/7JYzatqo_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Wyld</div>
<div style="text-align: center; font-size: 14px;">@wyld</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Wyld.
| Data | Wyld |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 601 |
| Short tweets | 574 |
| Tweets kept | 2064 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fod497b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wyld's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lk8zcqu3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lk8zcqu3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wyld')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kartikpalani/eai-setfit-model3
|
kartikpalani
| 2022-11-09T22:13:11Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-09T22:13:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3214 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 3214,
"warmup_steps": 322,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sd-concepts-library/devonm
|
sd-concepts-library
| 2022-11-09T22:09:46Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-09T22:09:35Z |
---
license: mit
---
### DevonM on Stable Diffusion
This is the `<DevonM>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
model-attribution-challenge/bert-base-uncased
|
model-attribution-challenge
| 2022-11-09T22:02:03Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-09T20:14:45Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
Twenty-four smaller models were released afterward.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
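In the Hugging Face ecosystem, an equivalent optimizer and schedule could be sketched as follows (illustrative only; the original pretraining ran on TPUs with Google's own code, and the model class and step counts here simply mirror the description above):
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-uncased")
# Adam with lr 1e-4, betas (0.9, 0.999) and weight decay 0.01 (AdamW in PyTorch terms)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
# 10,000 warmup steps followed by linear decay over 1,000,000 total steps
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000)
```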
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
huggingtweets/bradsprigg
|
huggingtweets
| 2022-11-09T21:57:01Z | 98 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-09T21:48:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/bradsprigg/1668030722213/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468456063775117312/6LimXaG6_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon's Musk (stinky boy)</div>
<div style="text-align: center; font-size: 14px;">@bradsprigg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon's Musk (stinky boy).
| Data | Elon's Musk (stinky boy) |
| --- | --- |
| Tweets downloaded | 3224 |
| Retweets | 657 |
| Short tweets | 239 |
| Tweets kept | 2328 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2kr31b63/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bradsprigg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3uyo0305) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3uyo0305/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bradsprigg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Alred/distilbert-base-uncased-finetuned-squad-ver5
|
Alred
| 2022-11-09T21:29:48Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-09T21:13:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad-ver5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-ver5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
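A sketch of how these settings might be expressed with `TrainingArguments` (illustrative; the actual training script was not published with this card, and the output directory name is an assumption):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad-ver5",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",   # Adam with the listed betas/epsilon is the Trainer default
    num_train_epochs=2,
)
```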
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5572 | 1.0 | 554 | 1.5588 |
| 1.2784 | 2.0 | 1108 | 1.4776 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
fractalego/personal-whisper-small.en-model
|
fractalego
| 2022-11-09T21:22:35Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-06T18:39:24Z |
Personal speech to text model
-----------------------------
Speech-to-text models often do not understand my accent, so I fine-tuned this one from "openai/whisper-small.en" using about 1000 recordings of my voice, comprising about 2h of audio. The system goes from ~10% WER to 6% WER. A larger model would perform better, but I need speed.
Do not download unless you have exactly my accent (North-East Italy).
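A minimal usage sketch with the `transformers` ASR pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and transcribe a local recording.
asr = pipeline("automatic-speech-recognition", model="fractalego/personal-whisper-small.en-model")
print(asr("my_recording.wav")["text"])  # "my_recording.wav" is a placeholder path
```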
|
Alred/distilbert-base-uncased-finetuned-squad-ver4
|
Alred
| 2022-11-09T21:13:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-09T20:05:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad-ver4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-ver4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8147 | 1.0 | 554 | 1.6712 |
| 1.4844 | 2.0 | 1108 | 1.4681 |
| 1.0993 | 3.0 | 1662 | 1.4931 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
huggingtweets/bong_iverr
|
huggingtweets
| 2022-11-09T21:11:55Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-09T21:11:48Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1579688050300436480/Ou3iqmdl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">carsonogenic</div>
<div style="text-align: center; font-size: 14px;">@bong_iverr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from carsonogenic.
| Data | carsonogenic |
| --- | --- |
| Tweets downloaded | 726 |
| Retweets | 59 |
| Short tweets | 42 |
| Tweets kept | 625 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3oyq7g4j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bong_iverr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jsj4h3w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jsj4h3w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bong_iverr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gngpostalsrvc/BERiT_7000
|
gngpostalsrvc
| 2022-11-09T20:58:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-09T20:32:09Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_7000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_7000
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.9484 | 0.19 | 500 | 7.8474 |
| 7.7968 | 0.39 | 1000 | 7.7020 |
| 7.6992 | 0.58 | 1500 | 7.6949 |
| 7.656 | 0.77 | 2000 | 7.6922 |
| 7.68 | 0.97 | 2500 | 7.6863 |
| 7.5952 | 1.16 | 3000 | 7.6523 |
| 7.6441 | 1.36 | 3500 | 7.6523 |
| 7.6178 | 1.55 | 4000 | 7.6128 |
| 7.5977 | 1.74 | 4500 | 7.6556 |
| 7.6087 | 1.94 | 5000 | 7.5990 |
| 7.5734 | 2.13 | 5500 | 7.5997 |
| 7.566 | 2.32 | 6000 | 7.5961 |
| 7.5715 | 2.52 | 6500 | 7.5505 |
| 7.5604 | 2.71 | 7000 | 7.5788 |
| 7.5749 | 2.9 | 7500 | 7.5916 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
gngpostalsrvc/BERiT_14500
|
gngpostalsrvc
| 2022-11-09T20:04:25Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-09T19:36:45Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_14500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_14500
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.3825 | 0.19 | 500 | 8.3006 |
| 8.2426 | 0.39 | 1000 | 8.2751 |
| 8.1622 | 0.58 | 1500 | 8.2504 |
| 8.1673 | 0.77 | 2000 | 8.1935 |
| 8.1597 | 0.97 | 2500 | 8.1928 |
| 8.0644 | 1.16 | 3000 | 8.1111 |
| 8.0724 | 1.36 | 3500 | 8.0820 |
| 8.0654 | 1.55 | 4000 | 8.0655 |
| 8.0649 | 1.74 | 4500 | 8.0896 |
| 8.051 | 1.94 | 5000 | 8.0838 |
| 8.0003 | 2.13 | 5500 | 8.0989 |
| 7.9795 | 2.32 | 6000 | 8.0729 |
| 7.9984 | 2.52 | 6500 | 8.0566 |
| 7.9935 | 2.71 | 7000 | 8.0757 |
| 7.9652 | 2.9 | 7500 | 8.0316 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
sandorscog/finetuning-sentiment-model-3000-samples
|
sandorscog
| 2022-11-09T19:42:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-18T04:16:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0826
- Accuracy: 0.9761
- Precision: 0.9727
- Recall: 0.9654
- F1: 0.9691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Alred/distilbert-base-uncased-finetuned-squad-ver2
|
Alred
| 2022-11-09T19:22:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-09T19:00:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad-ver2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-ver2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3919 | 1.0 | 554 | 1.5543 |
| 1.0864 | 2.0 | 1108 | 1.5114 |
| 0.5553 | 3.0 | 1662 | 1.8695 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
pig4431/TSE_DistilBERT_5E
|
pig4431
| 2022-11-09T19:16:01Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-09T19:14:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TSE_DistilBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSE_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3301
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6534 | 0.06 | 50 | 0.5269 | 0.8333 |
| 0.3926 | 0.12 | 100 | 0.2674 | 0.9133 |
| 0.275 | 0.17 | 150 | 0.2063 | 0.94 |
| 0.2341 | 0.23 | 200 | 0.1896 | 0.9333 |
| 0.2436 | 0.29 | 250 | 0.2132 | 0.9133 |
| 0.2561 | 0.35 | 300 | 0.2474 | 0.9 |
| 0.2536 | 0.4 | 350 | 0.2092 | 0.9267 |
| 0.2048 | 0.46 | 400 | 0.2135 | 0.92 |
| 0.2119 | 0.52 | 450 | 0.2382 | 0.9133 |
| 0.2152 | 0.58 | 500 | 0.2322 | 0.9267 |
| 0.2072 | 0.63 | 550 | 0.2182 | 0.9333 |
| 0.2134 | 0.69 | 600 | 0.2457 | 0.9133 |
| 0.2093 | 0.75 | 650 | 0.2476 | 0.92 |
| 0.2145 | 0.81 | 700 | 0.2489 | 0.9267 |
| 0.2191 | 0.87 | 750 | 0.2374 | 0.9267 |
| 0.2198 | 0.92 | 800 | 0.2347 | 0.92 |
| 0.2126 | 0.98 | 850 | 0.2015 | 0.9467 |
| 0.1373 | 1.04 | 900 | 0.2246 | 0.9467 |
| 0.1367 | 1.1 | 950 | 0.2875 | 0.9133 |
| 0.1726 | 1.15 | 1000 | 0.2641 | 0.94 |
| 0.1968 | 1.21 | 1050 | 0.2653 | 0.9333 |
| 0.1607 | 1.27 | 1100 | 0.2323 | 0.94 |
| 0.1437 | 1.33 | 1150 | 0.2900 | 0.9267 |
| 0.1707 | 1.38 | 1200 | 0.2430 | 0.94 |
| 0.1174 | 1.44 | 1250 | 0.2553 | 0.94 |
| 0.1662 | 1.5 | 1300 | 0.2442 | 0.9467 |
| 0.1374 | 1.56 | 1350 | 0.2365 | 0.9467 |
| 0.1632 | 1.61 | 1400 | 0.2794 | 0.9133 |
| 0.1558 | 1.67 | 1450 | 0.2428 | 0.94 |
| 0.1717 | 1.73 | 1500 | 0.2380 | 0.92 |
| 0.1301 | 1.79 | 1550 | 0.2006 | 0.94 |
| 0.1757 | 1.85 | 1600 | 0.2327 | 0.9467 |
| 0.1997 | 1.9 | 1650 | 0.2160 | 0.94 |
| 0.1611 | 1.96 | 1700 | 0.2797 | 0.92 |
| 0.1638 | 2.02 | 1750 | 0.2433 | 0.9333 |
| 0.1041 | 2.08 | 1800 | 0.2389 | 0.94 |
| 0.1172 | 2.13 | 1850 | 0.2381 | 0.9467 |
| 0.1332 | 2.19 | 1900 | 0.2650 | 0.94 |
| 0.1299 | 2.25 | 1950 | 0.2869 | 0.9333 |
| 0.0992 | 2.31 | 2000 | 0.2308 | 0.9533 |
| 0.1012 | 2.36 | 2050 | 0.2552 | 0.9467 |
| 0.0948 | 2.42 | 2100 | 0.2823 | 0.9267 |
| 0.1081 | 2.48 | 2150 | 0.2634 | 0.9467 |
| 0.1157 | 2.54 | 2200 | 0.2864 | 0.9333 |
| 0.1154 | 2.6 | 2250 | 0.2987 | 0.9267 |
| 0.1259 | 2.65 | 2300 | 0.2879 | 0.9333 |
| 0.1084 | 2.71 | 2350 | 0.2661 | 0.94 |
| 0.1342 | 2.77 | 2400 | 0.2711 | 0.94 |
| 0.12 | 2.83 | 2450 | 0.2362 | 0.9467 |
| 0.0839 | 2.88 | 2500 | 0.2712 | 0.9333 |
| 0.1546 | 2.94 | 2550 | 0.2433 | 0.9467 |
| 0.1321 | 3.0 | 2600 | 0.2421 | 0.9467 |
| 0.101 | 3.06 | 2650 | 0.2820 | 0.9333 |
| 0.061 | 3.11 | 2700 | 0.2990 | 0.9267 |
| 0.0608 | 3.17 | 2750 | 0.2512 | 0.9467 |
| 0.0983 | 3.23 | 2800 | 0.3033 | 0.9333 |
| 0.0806 | 3.29 | 2850 | 0.2621 | 0.9467 |
| 0.0788 | 3.34 | 2900 | 0.2672 | 0.9467 |
| 0.0827 | 3.4 | 2950 | 0.2797 | 0.9467 |
| 0.0912 | 3.46 | 3000 | 0.2802 | 0.9467 |
| 0.0771 | 3.52 | 3050 | 0.2693 | 0.9467 |
| 0.0842 | 3.58 | 3100 | 0.2758 | 0.9467 |
| 0.086 | 3.63 | 3150 | 0.2921 | 0.9333 |
| 0.1102 | 3.69 | 3200 | 0.3066 | 0.9333 |
| 0.1124 | 3.75 | 3250 | 0.2808 | 0.9333 |
| 0.0762 | 3.81 | 3300 | 0.2863 | 0.94 |
| 0.074 | 3.86 | 3350 | 0.3159 | 0.9333 |
| 0.062 | 3.92 | 3400 | 0.2977 | 0.9333 |
| 0.1027 | 3.98 | 3450 | 0.3449 | 0.9267 |
| 0.0734 | 4.04 | 3500 | 0.3165 | 0.9333 |
| 0.0375 | 4.09 | 3550 | 0.2960 | 0.9333 |
| 0.0377 | 4.15 | 3600 | 0.3245 | 0.9333 |
| 0.0661 | 4.21 | 3650 | 0.3262 | 0.9333 |
| 0.079 | 4.27 | 3700 | 0.3085 | 0.9333 |
| 0.0801 | 4.33 | 3750 | 0.3219 | 0.9333 |
| 0.0865 | 4.38 | 3800 | 0.3336 | 0.9267 |
| 0.058 | 4.44 | 3850 | 0.3083 | 0.9333 |
| 0.0689 | 4.5 | 3900 | 0.3351 | 0.9267 |
| 0.0345 | 4.56 | 3950 | 0.3412 | 0.9267 |
| 0.0557 | 4.61 | 4000 | 0.3236 | 0.9333 |
| 0.0758 | 4.67 | 4050 | 0.3224 | 0.9333 |
| 0.0682 | 4.73 | 4100 | 0.3241 | 0.9333 |
| 0.0534 | 4.79 | 4150 | 0.3349 | 0.9333 |
| 0.0707 | 4.84 | 4200 | 0.3254 | 0.9333 |
| 0.0672 | 4.9 | 4250 | 0.3277 | 0.9333 |
| 0.1033 | 4.96 | 4300 | 0.3301 | 0.9333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
|
burakyldrm/wav2vec2-burak-new-300-v2-5
|
burakyldrm
| 2022-11-09T18:46:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T12:37:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-burak-new-300-v2-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2833
- Wer: 0.2168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 141
- mixed_precision_training: Native AMP
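A sketch of these settings as `TrainingArguments` (illustrative; the actual training script was not published with this card, and the output directory name is an assumption):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-burak-new-300-v2-5",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size of 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=141,
    fp16=True,                       # mixed precision (native AMP)
)
```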
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.3785 | 9.8 | 500 | 3.1380 | 1.0 |
| 1.863 | 19.6 | 1000 | 0.3638 | 0.4659 |
| 0.524 | 29.41 | 1500 | 0.2742 | 0.3379 |
| 0.3581 | 39.21 | 2000 | 0.2746 | 0.3049 |
| 0.2783 | 49.02 | 2500 | 0.2559 | 0.2877 |
| 0.2378 | 58.82 | 3000 | 0.2613 | 0.2732 |
| 0.2062 | 68.62 | 3500 | 0.2499 | 0.2602 |
| 0.1849 | 78.43 | 4000 | 0.2809 | 0.2485 |
| 0.1663 | 88.23 | 4500 | 0.2768 | 0.2429 |
| 0.1526 | 98.04 | 5000 | 0.2767 | 0.2319 |
| 0.1434 | 107.84 | 5500 | 0.2886 | 0.2285 |
| 0.1338 | 117.64 | 6000 | 0.2808 | 0.2257 |
| 0.1313 | 127.45 | 6500 | 0.2835 | 0.2106 |
| 0.1281 | 137.25 | 7000 | 0.2833 | 0.2168 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
ManMir/ppo-LunarLander-v2
|
ManMir
| 2022-11-09T18:42:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-04T14:05:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 169.73 +/- 73.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption following the usual `huggingface_sb3` naming; check the repository files for the actual archive name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="ManMir/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pig4431/TSE_ALBERT_5E
|
pig4431
| 2022-11-09T18:05:39Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-09T18:05:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TSE_ALBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSE_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3667
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5712 | 0.06 | 50 | 0.4047 | 0.82 |
| 0.3198 | 0.12 | 100 | 0.2883 | 0.9 |
| 0.3254 | 0.17 | 150 | 0.4352 | 0.84 |
| 0.2898 | 0.23 | 200 | 0.2892 | 0.9133 |
| 0.2826 | 0.29 | 250 | 0.3565 | 0.8867 |
| 0.2696 | 0.35 | 300 | 0.2263 | 0.9333 |
| 0.274 | 0.4 | 350 | 0.2068 | 0.94 |
| 0.2393 | 0.46 | 400 | 0.2270 | 0.9333 |
| 0.2067 | 0.52 | 450 | 0.2118 | 0.9333 |
| 0.2332 | 0.58 | 500 | 0.4454 | 0.88 |
| 0.3099 | 0.63 | 550 | 0.2777 | 0.9067 |
| 0.2687 | 0.69 | 600 | 0.2077 | 0.9333 |
| 0.2053 | 0.75 | 650 | 0.1923 | 0.9533 |
| 0.2359 | 0.81 | 700 | 0.3891 | 0.9067 |
| 0.2492 | 0.87 | 750 | 0.2765 | 0.9333 |
| 0.2589 | 0.92 | 800 | 0.1879 | 0.9467 |
| 0.2161 | 0.98 | 850 | 0.2733 | 0.9267 |
| 0.1752 | 1.04 | 900 | 0.3108 | 0.92 |
| 0.2213 | 1.1 | 950 | 0.3318 | 0.92 |
| 0.1665 | 1.15 | 1000 | 0.4124 | 0.8933 |
| 0.1832 | 1.21 | 1050 | 0.3448 | 0.92 |
| 0.1671 | 1.27 | 1100 | 0.3343 | 0.9067 |
| 0.184 | 1.33 | 1150 | 0.3929 | 0.9067 |
| 0.2788 | 1.38 | 1200 | 0.3888 | 0.8933 |
| 0.1768 | 1.44 | 1250 | 0.4028 | 0.9 |
| 0.2368 | 1.5 | 1300 | 0.3154 | 0.9133 |
| 0.2055 | 1.56 | 1350 | 0.2603 | 0.9267 |
| 0.1693 | 1.61 | 1400 | 0.2994 | 0.9267 |
| 0.1447 | 1.67 | 1450 | 0.3247 | 0.9267 |
| 0.226 | 1.73 | 1500 | 0.3410 | 0.9267 |
| 0.1744 | 1.79 | 1550 | 0.3105 | 0.9267 |
| 0.1943 | 1.85 | 1600 | 0.2760 | 0.94 |
| 0.2093 | 1.9 | 1650 | 0.2087 | 0.9467 |
| 0.2027 | 1.96 | 1700 | 0.2773 | 0.9333 |
| 0.1806 | 2.02 | 1750 | 0.3386 | 0.9267 |
| 0.1161 | 2.08 | 1800 | 0.4301 | 0.9067 |
| 0.0916 | 2.13 | 1850 | 0.3693 | 0.92 |
| 0.1586 | 2.19 | 1900 | 0.2929 | 0.94 |
| 0.1336 | 2.25 | 1950 | 0.4015 | 0.9133 |
| 0.1746 | 2.31 | 2000 | 0.3027 | 0.92 |
| 0.1353 | 2.36 | 2050 | 0.3224 | 0.9267 |
| 0.116 | 2.42 | 2100 | 0.3609 | 0.9267 |
| 0.1807 | 2.48 | 2150 | 0.3044 | 0.9267 |
| 0.1016 | 2.54 | 2200 | 0.3706 | 0.9133 |
| 0.0634 | 2.6 | 2250 | 0.3391 | 0.92 |
| 0.167 | 2.65 | 2300 | 0.3463 | 0.92 |
| 0.1718 | 2.71 | 2350 | 0.3254 | 0.92 |
| 0.1269 | 2.77 | 2400 | 0.2640 | 0.9333 |
| 0.1848 | 2.83 | 2450 | 0.2660 | 0.9267 |
| 0.116 | 2.88 | 2500 | 0.2532 | 0.94 |
| 0.1804 | 2.94 | 2550 | 0.3538 | 0.92 |
| 0.1315 | 3.0 | 2600 | 0.4146 | 0.9067 |
| 0.1024 | 3.06 | 2650 | 0.2899 | 0.9333 |
| 0.0904 | 3.11 | 2700 | 0.3191 | 0.9333 |
| 0.0596 | 3.17 | 2750 | 0.3569 | 0.9333 |
| 0.1144 | 3.23 | 2800 | 0.3373 | 0.9267 |
| 0.0782 | 3.29 | 2850 | 0.3447 | 0.9267 |
| 0.064 | 3.34 | 2900 | 0.2932 | 0.94 |
| 0.118 | 3.4 | 2950 | 0.3099 | 0.94 |
| 0.1286 | 3.46 | 3000 | 0.3404 | 0.9267 |
| 0.0963 | 3.52 | 3050 | 0.4026 | 0.9067 |
| 0.1158 | 3.58 | 3100 | 0.3320 | 0.9267 |
| 0.0967 | 3.63 | 3150 | 0.2984 | 0.94 |
| 0.1122 | 3.69 | 3200 | 0.3149 | 0.9333 |
| 0.134 | 3.75 | 3250 | 0.3804 | 0.9133 |
| 0.0953 | 3.81 | 3300 | 0.3670 | 0.92 |
| 0.0776 | 3.86 | 3350 | 0.4140 | 0.92 |
| 0.0813 | 3.92 | 3400 | 0.3654 | 0.9333 |
| 0.0406 | 3.98 | 3450 | 0.4364 | 0.92 |
| 0.0538 | 4.04 | 3500 | 0.3553 | 0.94 |
| 0.0734 | 4.09 | 3550 | 0.3814 | 0.9267 |
| 0.0396 | 4.15 | 3600 | 0.3978 | 0.9267 |
| 0.0427 | 4.21 | 3650 | 0.4333 | 0.92 |
| 0.1472 | 4.27 | 3700 | 0.3816 | 0.92 |
| 0.0587 | 4.33 | 3750 | 0.3624 | 0.92 |
| 0.0549 | 4.38 | 3800 | 0.3461 | 0.9333 |
| 0.0606 | 4.44 | 3850 | 0.3562 | 0.94 |
| 0.0483 | 4.5 | 3900 | 0.3655 | 0.9333 |
| 0.0351 | 4.56 | 3950 | 0.3613 | 0.9333 |
| 0.0763 | 4.61 | 4000 | 0.3641 | 0.94 |
| 0.0835 | 4.67 | 4050 | 0.3669 | 0.9333 |
| 0.0542 | 4.73 | 4100 | 0.3569 | 0.9333 |
| 0.0804 | 4.79 | 4150 | 0.3575 | 0.9333 |
| 0.0336 | 4.84 | 4200 | 0.3655 | 0.9333 |
| 0.0631 | 4.9 | 4250 | 0.3646 | 0.9333 |
| 0.0183 | 4.96 | 4300 | 0.3667 | 0.9333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
|
JoanWaweru/Code-SwitchedSentimentAnalysis
|
JoanWaweru
| 2022-11-09T17:59:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-09T17:14:49Z |
# IS2Project
This is a customer sentiment analysis project for code-switched language: a case of Safaricom Limited. The proposed model detects customer sentiment in the code-switched pair (English-Swahili) for Safaricom users using Support Vector Machines, categorizing tweets into good reviews and bad reviews.
The model is also compared with Logistic Regression and Naive Bayes to see which model performs best.
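A minimal, illustrative sketch of this setup with scikit-learn (the tweets and labels below are placeholders, not the project's Safaricom dataset):
```python
# Illustrative only: TF-IDF features with an SVM, compared against Logistic
# Regression and Naive Bayes on placeholder data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["Safaricom network iko poa sana leo", "Nimechoka na hii internet polepole"]
labels = ["good", "bad"]

for clf in (LinearSVC(), LogisticRegression(), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(tweets, labels)
    print(type(clf).__name__, model.predict(["huduma ni poa sana"]))
```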
|
Watwat100/pls
|
Watwat100
| 2022-11-09T17:51:10Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-09T17:50:52Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Watwat100/pls')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Watwat100/pls)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 88 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 88,
"warmup_steps": 9,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
agvelu/xlm-roberta-base-finetuned-panx-de
|
agvelu
| 2022-11-09T17:46:35Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-08T01:06:13Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DavidNo/albert-xxlarge-v2-finetuned-squadv2
|
DavidNo
| 2022-11-09T17:13:35Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"albert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-09T12:25:34Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: DavidNo/albert-xxlarge-v2-finetuned-squadv2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DavidNo/albert-xxlarge-v2-finetuned-squadv2
This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7633
- Train End Logits Accuracy: 0.6680
- Train Start Logits Accuracy: 0.6407
- Validation Loss: 1.1441
- Validation End Logits Accuracy: 0.5277
- Validation Start Logits Accuracy: 0.5106
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16494, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0842 | 0.6032 | 0.5767 | 1.1372 | 0.5166 | 0.5058 | 0 |
| 0.7633 | 0.6680 | 0.6407 | 1.1441 | 0.5277 | 0.5106 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
pig4431/TSE_BERT_5E
|
pig4431
| 2022-11-09T16:54:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-09T16:52:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TSE_BERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSE_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3664
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6836 | 0.06 | 50 | 0.5614 | 0.8267 |
| 0.4679 | 0.12 | 100 | 0.3521 | 0.9 |
| 0.3325 | 0.17 | 150 | 0.2747 | 0.8933 |
| 0.2493 | 0.23 | 200 | 0.2712 | 0.9067 |
| 0.273 | 0.29 | 250 | 0.2304 | 0.9333 |
| 0.2888 | 0.35 | 300 | 0.2253 | 0.92 |
| 0.2558 | 0.4 | 350 | 0.2110 | 0.9267 |
| 0.1997 | 0.46 | 400 | 0.2206 | 0.9267 |
| 0.2748 | 0.52 | 450 | 0.2358 | 0.9267 |
| 0.2448 | 0.58 | 500 | 0.2942 | 0.8933 |
| 0.2247 | 0.63 | 550 | 0.2410 | 0.9067 |
| 0.2002 | 0.69 | 600 | 0.2222 | 0.9133 |
| 0.2668 | 0.75 | 650 | 0.2372 | 0.9133 |
| 0.2701 | 0.81 | 700 | 0.2288 | 0.9333 |
| 0.2034 | 0.87 | 750 | 0.2415 | 0.9267 |
| 0.2374 | 0.92 | 800 | 0.2278 | 0.92 |
| 0.2305 | 0.98 | 850 | 0.2270 | 0.92 |
| 0.1704 | 1.04 | 900 | 0.2591 | 0.9333 |
| 0.1826 | 1.1 | 950 | 0.2481 | 0.9267 |
| 0.1116 | 1.15 | 1000 | 0.2906 | 0.9133 |
| 0.1527 | 1.21 | 1050 | 0.2902 | 0.92 |
| 0.1692 | 1.27 | 1100 | 0.2489 | 0.9333 |
| 0.158 | 1.33 | 1150 | 0.2576 | 0.9333 |
| 0.1608 | 1.38 | 1200 | 0.3344 | 0.9267 |
| 0.1194 | 1.44 | 1250 | 0.3615 | 0.9267 |
| 0.201 | 1.5 | 1300 | 0.3374 | 0.92 |
| 0.1938 | 1.56 | 1350 | 0.2847 | 0.92 |
| 0.1479 | 1.61 | 1400 | 0.3044 | 0.9267 |
| 0.1628 | 1.67 | 1450 | 0.2980 | 0.9267 |
| 0.1783 | 1.73 | 1500 | 0.3132 | 0.9267 |
| 0.1885 | 1.79 | 1550 | 0.2676 | 0.9333 |
| 0.1651 | 1.85 | 1600 | 0.2709 | 0.9333 |
| 0.1376 | 1.9 | 1650 | 0.2777 | 0.94 |
| 0.1571 | 1.96 | 1700 | 0.2761 | 0.9333 |
| 0.1561 | 2.02 | 1750 | 0.2912 | 0.94 |
| 0.1187 | 2.08 | 1800 | 0.2893 | 0.9467 |
| 0.1205 | 2.13 | 1850 | 0.2882 | 0.9467 |
| 0.0751 | 2.19 | 1900 | 0.3032 | 0.9467 |
| 0.1412 | 2.25 | 1950 | 0.2926 | 0.9467 |
| 0.0783 | 2.31 | 2000 | 0.2962 | 0.9467 |
| 0.1094 | 2.36 | 2050 | 0.2909 | 0.9333 |
| 0.1158 | 2.42 | 2100 | 0.3087 | 0.9333 |
| 0.0606 | 2.48 | 2150 | 0.3102 | 0.9467 |
| 0.1164 | 2.54 | 2200 | 0.2812 | 0.94 |
| 0.1311 | 2.6 | 2250 | 0.3736 | 0.9267 |
| 0.1087 | 2.65 | 2300 | 0.3069 | 0.94 |
| 0.109 | 2.71 | 2350 | 0.3176 | 0.94 |
| 0.0789 | 2.77 | 2400 | 0.3130 | 0.94 |
| 0.0784 | 2.83 | 2450 | 0.3338 | 0.94 |
| 0.1388 | 2.88 | 2500 | 0.3440 | 0.9333 |
| 0.1062 | 2.94 | 2550 | 0.2883 | 0.94 |
| 0.1016 | 3.0 | 2600 | 0.2776 | 0.94 |
| 0.0642 | 3.06 | 2650 | 0.3302 | 0.9333 |
| 0.052 | 3.11 | 2700 | 0.3217 | 0.94 |
| 0.0539 | 3.17 | 2750 | 0.3899 | 0.9267 |
| 0.0593 | 3.23 | 2800 | 0.3283 | 0.9467 |
| 0.0468 | 3.29 | 2850 | 0.3382 | 0.9467 |
| 0.0546 | 3.34 | 2900 | 0.3133 | 0.9467 |
| 0.107 | 3.4 | 2950 | 0.3550 | 0.94 |
| 0.1079 | 3.46 | 3000 | 0.3484 | 0.94 |
| 0.0782 | 3.52 | 3050 | 0.3313 | 0.94 |
| 0.0635 | 3.58 | 3100 | 0.3418 | 0.94 |
| 0.0771 | 3.63 | 3150 | 0.3685 | 0.9333 |
| 0.0629 | 3.69 | 3200 | 0.3467 | 0.9333 |
| 0.0552 | 3.75 | 3250 | 0.3677 | 0.94 |
| 0.0531 | 3.81 | 3300 | 0.3436 | 0.9333 |
| 0.0819 | 3.86 | 3350 | 0.3802 | 0.9333 |
| 0.0583 | 3.92 | 3400 | 0.3441 | 0.9333 |
| 0.0434 | 3.98 | 3450 | 0.3666 | 0.9333 |
| 0.0747 | 4.04 | 3500 | 0.3554 | 0.9333 |
| 0.0309 | 4.09 | 3550 | 0.3582 | 0.9333 |
| 0.1057 | 4.15 | 3600 | 0.3615 | 0.9267 |
| 0.0391 | 4.21 | 3650 | 0.3583 | 0.9267 |
| 0.0433 | 4.27 | 3700 | 0.3514 | 0.9333 |
| 0.0597 | 4.33 | 3750 | 0.3580 | 0.9333 |
| 0.0663 | 4.38 | 3800 | 0.3390 | 0.94 |
| 0.0563 | 4.44 | 3850 | 0.3518 | 0.9267 |
| 0.0702 | 4.5 | 3900 | 0.3542 | 0.9267 |
| 0.0383 | 4.56 | 3950 | 0.3528 | 0.9267 |
| 0.0474 | 4.61 | 4000 | 0.3485 | 0.9333 |
| 0.0265 | 4.67 | 4050 | 0.3489 | 0.94 |
| 0.0165 | 4.73 | 4100 | 0.3616 | 0.9333 |
| 0.0489 | 4.79 | 4150 | 0.3579 | 0.9333 |
| 0.0478 | 4.84 | 4200 | 0.3603 | 0.9333 |
| 0.0536 | 4.9 | 4250 | 0.3666 | 0.9267 |
| 0.0551 | 4.96 | 4300 | 0.3664 | 0.9267 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
|
GItaf/JointGPT2-warmup-from-CLS
|
GItaf
| 2022-11-09T16:52:19Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-09T13:41:28Z |
---
tags:
- generated_from_trainer
model-index:
- name: GPT2-CLS-Finetuned-MBTI-GPT2-CLS-Finetuned-MBTI-JointGPT2-Warmup-from-CLS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2-CLS-Finetuned-MBTI-GPT2-CLS-Finetuned-MBTI-JointGPT2-Warmup-from-CLS
This model is a fine-tuned version of [GItaf/GPT2-CLS-Finetuned-MBTI](https://huggingface.co/GItaf/GPT2-CLS-Finetuned-MBTI) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
NineArtsDragon/bert-finetuned-ner
|
NineArtsDragon
| 2022-11-09T16:47:02Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-09T02:13:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 120 | 0.0053 | 0.8410 | 0.9372 | 0.8865 | 0.9991 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
NoCrypt/Anything-v3-0
|
NoCrypt
| 2022-11-09T16:46:59Z | 0 | 4 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-09T16:44:02Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
|
erikdavidsson42/distilbert-base-uncased-finetuned-medium
|
erikdavidsson42
| 2022-11-09T16:40:55Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-07T20:08:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: erikdavidsson42/distilbert-base-uncased-finetuned-medium
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# erikdavidsson42/distilbert-base-uncased-finetuned-medium
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9469
- Validation Loss: 2.7043
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7567, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.9469 | 2.7043 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.5.0
- Datasets 2.6.1
- Tokenizers 0.13.2
|
aajrami/bert-mlm-base
|
aajrami
| 2022-11-09T16:16:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"bert",
"license:cc-by-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-05-12T20:46:05Z |
---
tags:
- bert
license: cc-by-4.0
---
## bert-mlm-base
A BERT base Language Model with an **MLM** pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://aclanthology.org/2022.acl-short.16/)
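A minimal feature-extraction sketch (the example sentence is arbitrary, and it is assumed the repo ships its tokenizer files):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("aajrami/bert-mlm-base")
model = AutoModel.from_pretrained("aajrami/bert-mlm-base")

# Encode a sentence and take the contextual embeddings from the last layer.
inputs = tokenizer("The pre-training objective shapes what the model learns.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```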
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{alajrami2022does,
title={How does the pre-training objective affect what large language models learn about linguistic properties?},
author={Alajrami, Ahmed and Aletras, Nikolaos},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
pages={131--147},
year={2022}
}
```
|
aajrami/bert-ascii-medium
|
aajrami
| 2022-11-09T16:14:30Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"bert",
"license:cc-by-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-08T22:22:19Z |
---
tags:
- bert
license: cc-by-4.0
---
## bert-ascii-medium
A medium-size BERT Language Model pre-trained by predicting the summation of the **ASCII** code values of the characters in a masked token as a pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://aclanthology.org/2022.acl-short.16/)
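For intuition, the regression target for a masked token is simply the sum of the ASCII code values of its characters; a toy sketch (tokenization and normalization details follow the paper):
```python
# Toy illustration of the ASCII-sum pre-training target for a masked token.
def ascii_sum(token: str) -> int:
    return sum(ord(ch) for ch in token)

print(ascii_sum("language"))  # 836
```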
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{alajrami2022does,
title={How does the pre-training objective affect what large language models learn about linguistic properties?},
author={Alajrami, Ahmed and Aletras, Nikolaos},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
pages={131--147},
year={2022}
}
```
|
OFA-Sys/ofa-medium
|
OFA-Sys
| 2022-11-09T15:51:47Z | 49 | 5 |
transformers
|
[
"transformers",
"pytorch",
"ofa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-04-28T07:13:38Z |
---
license: apache-2.0
---
# OFA-medium
## Introduction
This is the **medium** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json` which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin` which contains the model weights. There is no need to worry about any mismatch between Fairseq and transformers, since we have already addressed the issue.
## How to use
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install transformers and download the models as shown below.
```bash
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-medium
```
Afterwards, set `ckpt_dir` to the path of OFA-medium, and prepare an image for the testing example below. Also, ensure that you have Pillow and torchvision in your environment.
```python
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> import torch
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 256
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
# using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
# using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
OFA-Sys/ofa-tiny
|
OFA-Sys
| 2022-11-09T15:51:26Z | 61 | 5 |
transformers
|
[
"transformers",
"pytorch",
"ofa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-04-28T06:16:45Z |
---
license: apache-2.0
---
# OFA-tiny
## Introduction
This is the **tiny** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json` which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin` which contains the model weights. There is no need to worry about any mismatch between Fairseq and transformers, since we have already addressed the issue.
## How to use
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install transformers and download the models as shown below.
```bash
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-tiny
```
Afterwards, set `ckpt_dir` to the path of OFA-tiny, and prepare an image for the testing example below. Also, ensure that you have Pillow and torchvision in your environment.
```python
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> import torch
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 256
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
# using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
# using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
OFA-Sys/ofa-large
|
OFA-Sys
| 2022-11-09T15:50:37Z | 75 | 12 |
transformers
|
[
"transformers",
"pytorch",
"ofa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-04-28T07:41:55Z |
---
license: apache-2.0
---
# OFA-large
## Introduction
This is the **large** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json` which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin` which contains the model weights. There is no need to worry about any mismatch between Fairseq and transformers, since we have already addressed the issue.
## How to use
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install transformers and download the models as shown below.
```bash
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-large
```
Afterwards, set `ckpt_dir` to the path of OFA-large, and prepare an image for the testing example below. Also, ensure that you have Pillow and torchvision in your environment.
```python
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> import torch
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 480
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
# using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
# using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
OFA-Sys/ofa-base
|
OFA-Sys
| 2022-11-09T15:50:09Z | 364 | 15 |
transformers
|
[
"transformers",
"pytorch",
"ofa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-04-28T07:27:45Z |
---
license: apache-2.0
---
# OFA-base
## Introduction
This is the **base** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json` which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin` which contains the model weights. There is no need to worry about any mismatch between Fairseq and transformers, since we have already addressed the issue.
## How to use
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install transformers and download the models as shown below.
```bash
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-base
```
Afterwards, set `ckpt_dir` to the path of OFA-base, and prepare an image for the testing example below. Also, ensure that you have Pillow and torchvision in your environment.
```python
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> import torch
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 384
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
# using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
# using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
dodge99/a2c-AntBulletEnv-v0-short-training
|
dodge99
| 2022-11-09T15:35:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-09T15:34:28Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 376.30 +/- 46.89
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Checkpoint filename inside the repo is assumed here.
checkpoint = load_from_hub("dodge99/a2c-AntBulletEnv-v0-short-training", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
crodri/wikicat_ca
|
crodri
| 2022-11-09T15:23:23Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"ca",
"dataset:projecte-aina/WikiCAT_ca",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-09T14:18:46Z |
---
tags:
- autotrain
- text-classification
language:
- ca
widget:
- text: "Aquest dissabte, Francesc Solé va arribar a la meta a Ordino com el guanyador del Ultra Trail d'Andorra després de 170km amb un desnivell altitudinal de 13 500 metres, en un temps de 31 hores i 9 minuts."
- text: "Una cançó és una composició musical que conté, a vegades, una part amb veu o melodia vocal, és a dir, amb text, cantada, però també pot ser simplement un conjunt de notes tocades sistemàticament, formant un ritme."
datasets:
- projecte-aina/WikiCAT_ca
co2_eq_emissions:
emissions: 47.543878831739285
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2036166932
- CO2 Emissions (in grams): 47.5439
## Validation Metrics
- Loss: 0.701
- Accuracy: 0.787
- Macro F1: 0.776
- Micro F1: 0.787
- Weighted F1: 0.784
- Macro Precision: 0.786
- Micro Precision: 0.787
- Weighted Precision: 0.788
- Macro Recall: 0.775
- Micro Recall: 0.787
- Weighted Recall: 0.787
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crodri/autotrain-wikicat_ca-2036166932
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crodri/wikicat_ca", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crodri/wikicat_ca", use_auth_token=True)
inputs = tokenizer("Una cançó és una composició musical que conté, a vegades, una part amb veu o melodia vocal, és a dir, amb text, cantada, però també pot ser simplement un conjunt de notes tocades sistemàticament, formant un ritme.", return_tensors="pt")
outputs = model(**inputs)
```
|
pig4431/TSE_fewshot
|
pig4431
| 2022-11-09T14:48:15Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-09T14:48:01Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pig4431/TSE_fewshot
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pig4431/TSE_fewshot')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pig4431/TSE_fewshot')
model = AutoModel.from_pretrained('pig4431/TSE_fewshot')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pig4431/TSE_fewshot)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 80,
"warmup_steps": 8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Marre-Barre/smthdssmth
|
Marre-Barre
| 2022-11-09T13:58:21Z | 0 | 9 | null |
[
"region:us"
] | null | 2022-11-09T13:06:39Z |
prompt: {{replace this with subject}}, art by smthdssmth
negative prompt: heavy contrast, out of focus, cropped, low details, deformed, ugly
scale: 8
steps: 50
`art by smthdssmth` is the trigger keyword.
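A minimal diffusers sketch of these settings, assuming a diffusers-compatible checkpoint is available under this repo id (otherwise load the raw checkpoint in your UI of choice):
```python
# Sketch only: assumes a diffusers-compatible checkpoint under this repo id.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Marre-Barre/smthdssmth", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "a misty mountain village, art by smthdssmth",  # replace the subject
    negative_prompt="heavy contrast, out of focus, cropped, low details, deformed, ugly",
    guidance_scale=8,
    num_inference_steps=50,
).images[0]
image.save("smthdssmth_sample.png")
```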
|
sakib131/whisper-small-bn
|
sakib131
| 2022-11-09T13:04:48Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"bn",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-08T10:01:56Z |
---
language:
- bn
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 40.38828355674009
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1305
- Wer: 40.3883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1994 | 0.16 | 1000 | 0.2302 | 58.9519 |
| 0.1424 | 0.32 | 2000 | 0.1697 | 48.0494 |
| 0.1379 | 0.48 | 3000 | 0.1434 | 43.1854 |
| 0.1209 | 0.64 | 4000 | 0.1305 | 40.3883 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
edbeeching/dmlab_30_1111
|
edbeeching
| 2022-11-09T13:01:57Z | 5 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-09T12:59:54Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: dmlab_30
type: dmlab_30
metrics:
- type: mean_reward
value: 9.18 +/- 0.64
name: mean_reward
verified: false
---
An **APPO** model trained on the **dmlab_30** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
AlekseyKorshuk/dalio-1.3b-test
|
AlekseyKorshuk
| 2022-11-09T12:02:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-08T23:12:16Z |
---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dalio-1.3b-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-1.3b-test
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6035
- Accuracy: 0.0672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6133 | 0.08 | 1 | 2.625 | 0.0652 |
| 2.6199 | 0.15 | 2 | 2.625 | 0.0652 |
| 2.7202 | 0.23 | 3 | 2.6113 | 0.0658 |
| 2.6177 | 0.31 | 4 | 2.6113 | 0.0658 |
| 2.5422 | 0.38 | 5 | 2.5703 | 0.0661 |
| 2.5627 | 0.46 | 6 | 2.5566 | 0.0662 |
| 2.5784 | 0.54 | 7 | 2.5469 | 0.0664 |
| 2.5264 | 0.62 | 8 | 2.5371 | 0.0663 |
| 2.3396 | 0.69 | 9 | 2.5332 | 0.0670 |
| 2.4297 | 0.77 | 10 | 2.5273 | 0.0673 |
| 2.3914 | 0.85 | 11 | 2.5234 | 0.0672 |
| 2.429 | 0.92 | 12 | 2.5195 | 0.0671 |
| 2.3055 | 1.0 | 13 | 2.5117 | 0.0672 |
| 1.7162 | 1.08 | 14 | 2.5215 | 0.0672 |
| 1.7264 | 1.15 | 15 | 2.5469 | 0.0677 |
| 1.7559 | 1.23 | 16 | 2.5879 | 0.0671 |
| 1.7899 | 1.31 | 17 | 2.6113 | 0.0667 |
| 1.6465 | 1.38 | 18 | 2.6191 | 0.0666 |
| 1.5955 | 1.46 | 19 | 2.6074 | 0.0671 |
| 1.5389 | 1.54 | 20 | 2.5957 | 0.0672 |
| 1.5356 | 1.62 | 21 | 2.5859 | 0.0670 |
| 1.386 | 1.69 | 22 | 2.5820 | 0.0672 |
| 1.7698 | 1.77 | 23 | 2.5742 | 0.0670 |
| 1.3923 | 1.85 | 24 | 2.5801 | 0.0669 |
| 1.4723 | 1.92 | 25 | 2.5898 | 0.0672 |
| 1.5653 | 2.0 | 26 | 2.6035 | 0.0672 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bigmorning/whisper_nosp_0020
|
bigmorning
| 2022-11-09T11:48:16Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T11:48:06Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_nosp_0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_nosp_0020
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1825
- Train Accuracy: 0.0228
- Validation Loss: 0.8115
- Validation Accuracy: 0.0203
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 7.5559 | 0.0010 | 6.3853 | 0.0013 | 0 |
| 6.3227 | 0.0021 | 5.7023 | 0.0038 | 1 |
| 4.9825 | 0.0063 | 3.6302 | 0.0109 | 2 |
| 2.9413 | 0.0126 | 2.1959 | 0.0154 | 3 |
| 1.9349 | 0.0157 | 1.6630 | 0.0172 | 4 |
| 1.4741 | 0.0171 | 1.3813 | 0.0181 | 5 |
| 1.1975 | 0.0181 | 1.2161 | 0.0186 | 6 |
| 1.0048 | 0.0188 | 1.0990 | 0.0191 | 7 |
| 0.8598 | 0.0194 | 1.0165 | 0.0194 | 8 |
| 0.7431 | 0.0199 | 0.9603 | 0.0196 | 9 |
| 0.6489 | 0.0203 | 0.9106 | 0.0198 | 10 |
| 0.5682 | 0.0207 | 0.8787 | 0.0199 | 11 |
| 0.4985 | 0.0210 | 0.8548 | 0.0200 | 12 |
| 0.4372 | 0.0213 | 0.8352 | 0.0201 | 13 |
| 0.3829 | 0.0216 | 0.8190 | 0.0202 | 14 |
| 0.3327 | 0.0219 | 0.8148 | 0.0202 | 15 |
| 0.2904 | 0.0221 | 0.8139 | 0.0202 | 16 |
| 0.2492 | 0.0224 | 0.8188 | 0.0202 | 17 |
| 0.2140 | 0.0226 | 0.8146 | 0.0203 | 18 |
| 0.1825 | 0.0228 | 0.8115 | 0.0203 | 19 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
bigmorning/whisper_nosp_0010
|
bigmorning
| 2022-11-09T11:04:17Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T11:04:06Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_nosp_0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_nosp_0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7431
- Train Accuracy: 0.0199
- Validation Loss: 0.9603
- Validation Accuracy: 0.0196
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 7.5559 | 0.0010 | 6.3853 | 0.0013 | 0 |
| 6.3227 | 0.0021 | 5.7023 | 0.0038 | 1 |
| 4.9825 | 0.0063 | 3.6302 | 0.0109 | 2 |
| 2.9413 | 0.0126 | 2.1959 | 0.0154 | 3 |
| 1.9349 | 0.0157 | 1.6630 | 0.0172 | 4 |
| 1.4741 | 0.0171 | 1.3813 | 0.0181 | 5 |
| 1.1975 | 0.0181 | 1.2161 | 0.0186 | 6 |
| 1.0048 | 0.0188 | 1.0990 | 0.0191 | 7 |
| 0.8598 | 0.0194 | 1.0165 | 0.0194 | 8 |
| 0.7431 | 0.0199 | 0.9603 | 0.0196 | 9 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
bigmorning/whisper_nosp_0005
|
bigmorning
| 2022-11-09T10:42:15Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T10:42:03Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_nosp_0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_nosp_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9349
- Train Accuracy: 0.0157
- Validation Loss: 1.6630
- Validation Accuracy: 0.0172
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 7.5559 | 0.0010 | 6.3853 | 0.0013 | 0 |
| 6.3227 | 0.0021 | 5.7023 | 0.0038 | 1 |
| 4.9825 | 0.0063 | 3.6302 | 0.0109 | 2 |
| 2.9413 | 0.0126 | 2.1959 | 0.0154 | 3 |
| 1.9349 | 0.0157 | 1.6630 | 0.0172 | 4 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
hushell/pmf_metadataset_dino
|
hushell
| 2022-11-09T10:34:36Z | 0 | 5 | null |
[
"region:us"
] | null | 2022-11-04T14:13:14Z |
# Model checkpoints for [PMF](https://github.com/hushell/pmf_cvpr22)
NOTE: for DINO-small, peak VRAM is about 32GB; for DINO-base, peak VRAM is about 42GB.
Meta-testing with `dino_small_patch16` trained on the full Meta-Dataset:
```
python -m torch.distributed.launch --nproc_per_node=8 --use_env test_meta_dataset.py --data-path ../../datasets/meta_dataset --dataset meta_dataset --arch dino_small_patch16 --deploy finetune --output outputs/md_full_dinosmall --resume md_full_128x128_dinosmall_fp16_lr5e-5/best.pth --dist-eval --ada_steps 100 --ada_lr 0.0001
```
Meta-testing with `dino_small_patch16` trained on the ImageNet domain of Meta-Dataset:
```
python -m torch.distributed.launch --nproc_per_node=8 --use_env test_meta_dataset.py --data-path ../../datasets/meta_dataset --dataset meta_dataset --arch dino_small_patch16 --deploy finetune --output outputs/md_inet_dinosmall_6gpus --resume pmf_metadataset_dino/md_inet_128x128_dinosmall_fp16_lr5e-5/best.pth --dist-eval --ada_steps 100 --ada_lr 0.0001
```
## Results
The meta-test learning rate, validated on 5 episodes per domain, is shown in brackets for each result.
Method |ILSVRC (test) |Omniglot |Aircraft |Birds |Textures |QuickDraw |Fungi |VGG Flower |Traffic signs |MSCOCO
---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------|---------------------------
[md_full_128x128_dinosmall_fp16_lr5e-5](https://huggingface.co/hushell/pmf_metadataset_dino/blob/main/md_full_128x128_dinosmall_fp16_lr5e-5/best.pth) |73.52±0.80 (lr=0.0001) |92.17±0.57 (lr=0.0001) |89.49±0.52 (lr=0.001) |91.04±0.37 (lr=0.0001) |85.73±0.62 (lr=0.001) |79.43±0.67 (lr=0.0001) |74.99±0.94 (lr=0) |95.30±0.44 (lr=0.001) |89.85±0.76 (lr=0.01) |59.69±1.02 (lr=0.001)
[md_inet_128x128_dinosmall_fp16_lr2e-4](https://huggingface.co/hushell/pmf_metadataset_dino/blob/main/md_imagenet_128x128_dinosmall_fp16_lr2e-4/best.pth) |75.51±0.72 (lr=0.001) |82.81±1.10 (lr=0.01) |78.38±1.09 (lr=0.01) |85.18±0.77 (lr=0.001) |86.95±0.60 (lr=0.001) |74.47±0.83 (lr=0.01) |55.16±1.09 (lr=0) |94.66±0.48 (lr=0) |90.04±0.81 (lr=0.01) |62.60±0.96 (lr=0.001)
|
Sennodipoi/lilt-distilroberta-base
|
Sennodipoi
| 2022-11-09T09:51:32Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"lilt",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-02T10:40:57Z |
This model combines the base version of DistilRoBERTa with the standalone version of LiLT. It was created with the code available at the original LiLT repository (https://github.com/jpWang/LiLT).
The model can be used for fine-tuning on token classification tasks or visual question answering.
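A minimal loading sketch, assuming a transformers release with LiLT support and that the repo ships tokenizer files (otherwise load the tokenizer from `distilroberta-base` with the same options); the words and bounding boxes below are toy values:
```python
import torch
from transformers import AutoTokenizer, AutoModel

repo = "Sennodipoi/lilt-distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(repo, add_prefix_space=True)  # assumes tokenizer files are present
model = AutoModel.from_pretrained(repo)

# LiLT takes token ids plus one 0-1000 normalized bounding box per token.
words = ["Invoice", "Total:", "42.00"]
boxes = [[74, 60, 160, 85], [74, 120, 150, 140], [160, 120, 220, 140]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
token_boxes = [
    [0, 0, 0, 0] if word_id is None else boxes[word_id]
    for word_id in encoding.word_ids()
]
encoding["bbox"] = torch.tensor([token_boxes])

outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```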
|
shafin/chemical-bert-uncased-finetuned-cust
|
shafin
| 2022-11-09T09:45:22Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-09T01:24:01Z |
---
tags:
- generated_from_trainer
model-index:
- name: chemical-bert-uncased-finetuned-cust
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chemical-bert-uncased-finetuned-cust
This model is a fine-tuned version of [recobo/chemical-bert-uncased](https://huggingface.co/recobo/chemical-bert-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.5876 | 1.0 | 63 | 2.7997 |
| 2.7843 | 2.0 | 126 | 2.3734 |
| 2.418 | 3.0 | 189 | 2.1510 |
| 2.2247 | 4.0 | 252 | 1.9822 |
| 2.062 | 5.0 | 315 | 1.8463 |
| 1.9875 | 6.0 | 378 | 1.8293 |
| 1.9034 | 7.0 | 441 | 1.7666 |
| 1.7818 | 8.0 | 504 | 1.6783 |
| 1.7131 | 9.0 | 567 | 1.5754 |
| 1.6793 | 10.0 | 630 | 1.5480 |
| 1.5773 | 11.0 | 693 | 1.4568 |
| 1.5391 | 12.0 | 756 | 1.5101 |
| 1.5049 | 13.0 | 819 | 1.4340 |
| 1.4476 | 14.0 | 882 | 1.4046 |
| 1.4032 | 15.0 | 945 | 1.3593 |
| 1.395 | 16.0 | 1008 | 1.3689 |
| 1.3353 | 17.0 | 1071 | 1.3350 |
| 1.3122 | 18.0 | 1134 | 1.2863 |
| 1.3036 | 19.0 | 1197 | 1.3690 |
| 1.2644 | 20.0 | 1260 | 1.1904 |
| 1.222 | 21.0 | 1323 | 1.1986 |
| 1.2091 | 22.0 | 1386 | 1.1650 |
| 1.2007 | 23.0 | 1449 | 1.1949 |
| 1.1456 | 24.0 | 1512 | 1.1649 |
| 1.1426 | 25.0 | 1575 | 1.1498 |
| 1.0883 | 26.0 | 1638 | 1.1489 |
| 1.0915 | 27.0 | 1701 | 1.1179 |
| 1.0635 | 28.0 | 1764 | 1.0726 |
| 1.0899 | 29.0 | 1827 | 1.1107 |
| 1.0251 | 30.0 | 1890 | 1.0944 |
| 1.0387 | 31.0 | 1953 | 1.0488 |
| 1.0037 | 32.0 | 2016 | 1.0679 |
| 1.0101 | 33.0 | 2079 | 1.0272 |
| 0.9595 | 34.0 | 2142 | 1.0158 |
| 0.9661 | 35.0 | 2205 | 1.0316 |
| 0.9535 | 36.0 | 2268 | 1.0086 |
| 0.9269 | 37.0 | 2331 | 1.0221 |
| 0.9395 | 38.0 | 2394 | 0.9626 |
| 0.9105 | 39.0 | 2457 | 0.9903 |
| 0.8888 | 40.0 | 2520 | 0.9892 |
| 0.9316 | 41.0 | 2583 | 0.9786 |
| 0.8804 | 42.0 | 2646 | 0.9938 |
| 0.8589 | 43.0 | 2709 | 1.0105 |
| 0.8573 | 44.0 | 2772 | 0.9729 |
| 0.8566 | 45.0 | 2835 | 0.9972 |
| 0.8392 | 46.0 | 2898 | 1.0085 |
| 0.8363 | 47.0 | 2961 | 0.9336 |
| 0.8184 | 48.0 | 3024 | 0.9886 |
| 0.7964 | 49.0 | 3087 | 0.9661 |
| 0.8025 | 50.0 | 3150 | 0.8956 |
| 0.8156 | 51.0 | 3213 | 0.9415 |
| 0.7906 | 52.0 | 3276 | 0.9381 |
| 0.7783 | 53.0 | 3339 | 0.9445 |
| 0.7696 | 54.0 | 3402 | 0.8859 |
| 0.763 | 55.0 | 3465 | 0.8851 |
| 0.7638 | 56.0 | 3528 | 0.9128 |
| 0.7576 | 57.0 | 3591 | 0.8629 |
| 0.757 | 58.0 | 3654 | 0.8917 |
| 0.7232 | 59.0 | 3717 | 0.8956 |
| 0.7327 | 60.0 | 3780 | 0.8727 |
| 0.7321 | 61.0 | 3843 | 0.8558 |
| 0.7131 | 62.0 | 3906 | 0.8876 |
| 0.696 | 63.0 | 3969 | 0.8872 |
| 0.6996 | 64.0 | 4032 | 0.7758 |
| 0.6807 | 65.0 | 4095 | 0.8657 |
| 0.6899 | 66.0 | 4158 | 0.8813 |
| 0.6873 | 67.0 | 4221 | 0.8488 |
| 0.6681 | 68.0 | 4284 | 0.8865 |
| 0.6758 | 69.0 | 4347 | 0.8447 |
| 0.6626 | 70.0 | 4410 | 0.8421 |
| 0.6535 | 71.0 | 4473 | 0.8313 |
| 0.6505 | 72.0 | 4536 | 0.8636 |
| 0.6654 | 73.0 | 4599 | 0.8433 |
| 0.6363 | 74.0 | 4662 | 0.7666 |
| 0.6395 | 75.0 | 4725 | 0.8882 |
| 0.6206 | 76.0 | 4788 | 0.8409 |
| 0.6365 | 77.0 | 4851 | 0.8807 |
| 0.6325 | 78.0 | 4914 | 0.8012 |
| 0.6142 | 79.0 | 4977 | 0.7705 |
| 0.6108 | 80.0 | 5040 | 0.8270 |
| 0.62 | 81.0 | 5103 | 0.8552 |
| 0.6188 | 82.0 | 5166 | 0.8377 |
| 0.6024 | 83.0 | 5229 | 0.7985 |
| 0.631 | 84.0 | 5292 | 0.8352 |
| 0.5871 | 85.0 | 5355 | 0.8086 |
| 0.6014 | 86.0 | 5418 | 0.8129 |
| 0.5842 | 87.0 | 5481 | 0.8649 |
| 0.5837 | 88.0 | 5544 | 0.8269 |
| 0.5958 | 89.0 | 5607 | 0.8407 |
| 0.564 | 90.0 | 5670 | 0.7906 |
| 0.5748 | 91.0 | 5733 | 0.7393 |
| 0.5918 | 92.0 | 5796 | 0.8445 |
| 0.5682 | 93.0 | 5859 | 0.8073 |
| 0.5497 | 94.0 | 5922 | 0.8165 |
| 0.5606 | 95.0 | 5985 | 0.7638 |
| 0.5593 | 96.0 | 6048 | 0.7929 |
| 0.5556 | 97.0 | 6111 | 0.7991 |
| 0.5604 | 98.0 | 6174 | 0.7417 |
| 0.5503 | 99.0 | 6237 | 0.8070 |
| 0.5561 | 100.0 | 6300 | 0.7845 |
| 0.5344 | 101.0 | 6363 | 0.7933 |
| 0.5209 | 102.0 | 6426 | 0.7741 |
| 0.5337 | 103.0 | 6489 | 0.7760 |
| 0.5437 | 104.0 | 6552 | 0.7634 |
| 0.5165 | 105.0 | 6615 | 0.7543 |
| 0.5343 | 106.0 | 6678 | 0.7661 |
| 0.5155 | 107.0 | 6741 | 0.7953 |
| 0.512 | 108.0 | 6804 | 0.8253 |
| 0.5259 | 109.0 | 6867 | 0.7570 |
| 0.5045 | 110.0 | 6930 | 0.7977 |
| 0.5115 | 111.0 | 6993 | 0.7598 |
| 0.5134 | 112.0 | 7056 | 0.7680 |
| 0.5076 | 113.0 | 7119 | 0.7696 |
| 0.5126 | 114.0 | 7182 | 0.7451 |
| 0.4963 | 115.0 | 7245 | 0.7923 |
| 0.5032 | 116.0 | 7308 | 0.7842 |
| 0.5137 | 117.0 | 7371 | 0.7239 |
| 0.488 | 118.0 | 7434 | 0.8188 |
| 0.4938 | 119.0 | 7497 | 0.7479 |
| 0.4866 | 120.0 | 7560 | 0.7761 |
| 0.4901 | 121.0 | 7623 | 0.7930 |
| 0.4877 | 122.0 | 7686 | 0.7733 |
| 0.4858 | 123.0 | 7749 | 0.7492 |
| 0.4813 | 124.0 | 7812 | 0.7645 |
| 0.4817 | 125.0 | 7875 | 0.7938 |
| 0.4822 | 126.0 | 7938 | 0.7253 |
| 0.4771 | 127.0 | 8001 | 0.7481 |
| 0.4769 | 128.0 | 8064 | 0.7402 |
| 0.4666 | 129.0 | 8127 | 0.7993 |
| 0.474 | 130.0 | 8190 | 0.7653 |
| 0.4718 | 131.0 | 8253 | 0.7524 |
| 0.4682 | 132.0 | 8316 | 0.7129 |
| 0.4698 | 133.0 | 8379 | 0.7806 |
| 0.4669 | 134.0 | 8442 | 0.7237 |
| 0.4401 | 135.0 | 8505 | 0.7185 |
| 0.4656 | 136.0 | 8568 | 0.7542 |
| 0.4569 | 137.0 | 8631 | 0.7412 |
| 0.4751 | 138.0 | 8694 | 0.7740 |
| 0.4474 | 139.0 | 8757 | 0.7636 |
| 0.4652 | 140.0 | 8820 | 0.7958 |
| 0.4539 | 141.0 | 8883 | 0.7410 |
| 0.4452 | 142.0 | 8946 | 0.7652 |
| 0.4516 | 143.0 | 9009 | 0.7337 |
| 0.4423 | 144.0 | 9072 | 0.7601 |
| 0.4542 | 145.0 | 9135 | 0.7692 |
| 0.4328 | 146.0 | 9198 | 0.7528 |
| 0.4503 | 147.0 | 9261 | 0.7673 |
| 0.4416 | 148.0 | 9324 | 0.7193 |
| 0.447 | 149.0 | 9387 | 0.7517 |
| 0.4434 | 150.0 | 9450 | 0.7241 |
| 0.4374 | 151.0 | 9513 | 0.7281 |
| 0.4334 | 152.0 | 9576 | 0.7150 |
| 0.4209 | 153.0 | 9639 | 0.7531 |
| 0.4405 | 154.0 | 9702 | 0.7252 |
| 0.4384 | 155.0 | 9765 | 0.7367 |
| 0.4265 | 156.0 | 9828 | 0.7111 |
| 0.4386 | 157.0 | 9891 | 0.7215 |
| 0.4276 | 158.0 | 9954 | 0.7119 |
| 0.4289 | 159.0 | 10017 | 0.7587 |
| 0.4415 | 160.0 | 10080 | 0.7935 |
| 0.4315 | 161.0 | 10143 | 0.7574 |
| 0.4227 | 162.0 | 10206 | 0.7296 |
| 0.4352 | 163.0 | 10269 | 0.7145 |
| 0.4108 | 164.0 | 10332 | 0.7133 |
| 0.433 | 165.0 | 10395 | 0.7369 |
| 0.4336 | 166.0 | 10458 | 0.7471 |
| 0.4016 | 167.0 | 10521 | 0.7329 |
| 0.4164 | 168.0 | 10584 | 0.7331 |
| 0.4182 | 169.0 | 10647 | 0.7449 |
| 0.4136 | 170.0 | 10710 | 0.7365 |
| 0.4183 | 171.0 | 10773 | 0.7248 |
| 0.4225 | 172.0 | 10836 | 0.7346 |
| 0.4294 | 173.0 | 10899 | 0.7099 |
| 0.4113 | 174.0 | 10962 | 0.7264 |
| 0.4216 | 175.0 | 11025 | 0.6822 |
| 0.4208 | 176.0 | 11088 | 0.7198 |
| 0.407 | 177.0 | 11151 | 0.7266 |
| 0.4164 | 178.0 | 11214 | 0.7466 |
| 0.4112 | 179.0 | 11277 | 0.7409 |
| 0.4067 | 180.0 | 11340 | 0.7058 |
| 0.4297 | 181.0 | 11403 | 0.6918 |
| 0.4137 | 182.0 | 11466 | 0.7432 |
| 0.4102 | 183.0 | 11529 | 0.7272 |
| 0.4184 | 184.0 | 11592 | 0.7309 |
| 0.4049 | 185.0 | 11655 | 0.7215 |
| 0.4097 | 186.0 | 11718 | 0.7375 |
| 0.419 | 187.0 | 11781 | 0.7575 |
| 0.4122 | 188.0 | 11844 | 0.7481 |
| 0.4089 | 189.0 | 11907 | 0.7790 |
| 0.4094 | 190.0 | 11970 | 0.7547 |
| 0.4107 | 191.0 | 12033 | 0.7390 |
| 0.4044 | 192.0 | 12096 | 0.7472 |
| 0.4065 | 193.0 | 12159 | 0.7283 |
| 0.4172 | 194.0 | 12222 | 0.7112 |
| 0.4124 | 195.0 | 12285 | 0.7470 |
| 0.4026 | 196.0 | 12348 | 0.7067 |
| 0.4179 | 197.0 | 12411 | 0.7259 |
| 0.4027 | 198.0 | 12474 | 0.7328 |
| 0.4101 | 199.0 | 12537 | 0.6891 |
| 0.3969 | 200.0 | 12600 | 0.7104 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
pig4431/CR_DistilBERT_5E
|
pig4431
| 2022-11-09T09:11:59Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-09T09:08:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CR_DistilBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CR_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3663
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
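Until more details are added, a minimal sketch for trying the classifier with the Transformers pipeline (the example sentence is arbitrary, and the returned label names come from the checkpoint config and are not documented here):

```python
from transformers import pipeline

# A minimal sketch: score a review sentence with the fine-tuned DistilBERT classifier
classifier = pipeline("text-classification", model="pig4431/CR_DistilBERT_5E")
print(classifier("The battery life of this player is excellent."))
```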
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6345 | 0.33 | 50 | 0.5656 | 0.66 |
| 0.4704 | 0.66 | 100 | 0.3705 | 0.82 |
| 0.3428 | 0.99 | 150 | 0.3186 | 0.8867 |
| 0.2272 | 1.32 | 200 | 0.2871 | 0.9 |
| 0.259 | 1.66 | 250 | 0.2975 | 0.8867 |
| 0.2583 | 1.99 | 300 | 0.3125 | 0.8867 |
| 0.1713 | 2.32 | 350 | 0.3146 | 0.8867 |
| 0.181 | 2.65 | 400 | 0.3602 | 0.8867 |
| 0.1868 | 2.98 | 450 | 0.3319 | 0.8933 |
| 0.1521 | 3.31 | 500 | 0.3413 | 0.8867 |
| 0.1153 | 3.64 | 550 | 0.3868 | 0.88 |
| 0.1238 | 3.97 | 600 | 0.3686 | 0.8867 |
| 0.1104 | 4.3 | 650 | 0.3674 | 0.8867 |
| 0.0881 | 4.64 | 700 | 0.3750 | 0.8867 |
| 0.1247 | 4.97 | 750 | 0.3663 | 0.9 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
|
pig4431/CR_BERT_5E
|
pig4431
| 2022-11-09T08:58:42Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-09T08:56:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CR_BERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CR_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5094
- Accuracy: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.694 | 0.33 | 50 | 0.5894 | 0.6733 |
| 0.5335 | 0.66 | 100 | 0.4150 | 0.84 |
| 0.3446 | 0.99 | 150 | 0.3052 | 0.9 |
| 0.241 | 1.32 | 200 | 0.3409 | 0.8733 |
| 0.2536 | 1.66 | 250 | 0.3101 | 0.88 |
| 0.2318 | 1.99 | 300 | 0.3015 | 0.8867 |
| 0.1527 | 2.32 | 350 | 0.3806 | 0.8733 |
| 0.1026 | 2.65 | 400 | 0.3788 | 0.8733 |
| 0.1675 | 2.98 | 450 | 0.3956 | 0.8933 |
| 0.0699 | 3.31 | 500 | 0.4532 | 0.8867 |
| 0.0848 | 3.64 | 550 | 0.4636 | 0.88 |
| 0.0991 | 3.97 | 600 | 0.4951 | 0.88 |
| 0.0578 | 4.3 | 650 | 0.5073 | 0.88 |
| 0.0636 | 4.64 | 700 | 0.5090 | 0.8733 |
| 0.0531 | 4.97 | 750 | 0.5094 | 0.8733 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
|
mijantscher/twitter-setfit-v1
|
mijantscher
| 2022-11-09T08:42:38Z | 2 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-09T08:42:21Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 160 with parameters:
```
{'batch_size': 20, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 160,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
pig4431/CR_fewshot
|
pig4431
| 2022-11-09T08:19:05Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-09T08:18:53Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 80,
"warmup_steps": 8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
OFA-Sys/ofa-base-vqa-fairseq-version
|
OFA-Sys
| 2022-11-09T08:07:17Z | 0 | 5 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-08-11T14:08:01Z |
---
license: apache-2.0
---
# OFA-Base-VQA
This is the official checkpoint (adapted to the official code rather than Hugging Face Transformers) of OFA-Base finetuned on VQA 2.0.
For more information, please refer to the official GitHub repository ([https://github.com/OFA-Sys/OFA](https://github.com/OFA-Sys/OFA)).
For now, we only provide finetuned checkpoints compatible with the official code.
|
OFA-Sys/ofa-base-refcoco-fairseq-version
|
OFA-Sys
| 2022-11-09T08:04:27Z | 0 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-08-11T13:39:10Z |
---
license: apache-2.0
---
# OFA-Base-RefCOCO
This is the official checkpoint (adapted to the official code rather than Hugging Face Transformers) of OFA-Base finetuned on RefCOCO for visual grounding.
For more information, please refer to the official GitHub repository ([https://github.com/OFA-Sys/OFA](https://github.com/OFA-Sys/OFA)).
For now, we only provide finetuned checkpoints compatible with the official code.
|
BLENDER100-MAX/FOX
|
BLENDER100-MAX
| 2022-11-09T07:19:51Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-09T07:19:51Z |
---
license: bigscience-openrail-m
---
|
huggingtweets/mumukshusavitri
|
huggingtweets
| 2022-11-09T06:57:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-09T06:53:30Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mumukshusavitri/1667977046540/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1588132608243773441/zuQl_2d7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Savitri Mumukshu - सावित्री मुमुक्षु</div>
<div style="text-align: center; font-size: 14px;">@mumukshusavitri</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Savitri Mumukshu - सावित्री मुमुक्षु.
| Data | Savitri Mumukshu - सावित्री मुमुक्षु |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 123 |
| Short tweets | 640 |
| Tweets kept | 2475 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21w2o0rg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mumukshusavitri's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2m3kx4jk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2m3kx4jk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mumukshusavitri')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
studio-ousia/luke-japanese-base-lite
|
studio-ousia
| 2022-11-09T06:22:22Z | 2,731 | 8 |
transformers
|
[
"transformers",
"pytorch",
"luke",
"fill-mask",
"named entity recognition",
"entity typing",
"relation classification",
"question answering",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-25T09:27:16Z |
---
language: ja
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- entity typing
- relation classification
- question answering
license: apache-2.0
---
## luke-japanese
**luke-japanese** is the Japanese version of **LUKE** (**L**anguage
**U**nderstanding with **K**nowledge-based **E**mbeddings), a pre-trained
_knowledge-enhanced_ contextualized representation of words and entities. LUKE
treats words and entities in a given text as independent tokens, and outputs
contextualized representations of them. Please refer to our
[GitHub repository](https://github.com/studio-ousia/luke) for more details and
updates.
This model is a lightweight version which does not contain Wikipedia entity
embeddings. Please use the
[full version](https://huggingface.co/studio-ousia/luke-japanese-base/) for
tasks that use Wikipedia entities as inputs.
**luke-japanese**は、単語とエンティティの知識拡張型訓練済み Transformer モデル**LUKE**の日本語版です。LUKE は単語とエンティティを独立したトークンとして扱い、これらの文脈を考慮した表現を出力します。詳細については、[GitHub リポジトリ](https://github.com/studio-ousia/luke)を参照してください。
このモデルは、Wikipedia エンティティのエンベディングを含まない軽量版のモデルです。Wikipedia エンティティを入力として使うタスクには、[full version](https://huggingface.co/studio-ousia/luke-japanese-base/)を使用してください。
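A minimal usage sketch with the Transformers Auto classes, assuming the standard word-only input (no entity inputs, since this is the lite version):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-japanese-base-lite")
model = AutoModel.from_pretrained("studio-ousia/luke-japanese-base-lite")

# "The capital of Japan is Tokyo."
inputs = tokenizer("日本の首都は東京です。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextualized word representations
```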
### Experimental results on JGLUE
The experimental results evaluated on the dev set of
[JGLUE](https://github.com/yahoojapan/JGLUE) are shown as follows:
| Model | MARC-ja | JSTS | JNLI | JCommonsenseQA |
| ---------------------- | --------- | ------------------- | --------- | -------------- |
| | acc | Pearson/Spearman | acc | acc |
| **LUKE Japanese base** | **0.965** | **0.916**/**0.877** | **0.912** | **0.842** |
| _Baselines:_ | | | | |
| Tohoku BERT base | 0.958 | 0.909/0.868 | 0.899 | 0.808 |
| NICT BERT base | 0.958 | 0.910/0.871 | 0.902 | 0.823 |
| Waseda RoBERTa base | 0.962 | 0.913/0.873 | 0.895 | 0.840 |
| XLM RoBERTa base | 0.961 | 0.877/0.831 | 0.893 | 0.687 |
The baseline scores are obtained from
[here](https://github.com/yahoojapan/JGLUE/blob/a6832af23895d6faec8ecf39ec925f1a91601d62/README.md).
### Citation
```latex
@inproceedings{yamada2020luke,
title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
booktitle={EMNLP},
year={2020}
}
```
|
Ahmed87/bert-cased-ner-fcit499
|
Ahmed87
| 2022-11-09T05:59:01Z | 121 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-09T04:39:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-cased-ner-fcit499
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9417409184372858
- name: Recall
type: recall
value: 0.950207468879668
- name: F1
type: f1
value: 0.9459552495697073
- name: Accuracy
type: accuracy
value: 0.9905416329830234
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-cased-ner-fcit499
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0404
- Precision: 0.9417
- Recall: 0.9502
- F1: 0.9460
- Accuracy: 0.9905
## Model description
More information needed
## Intended uses & limitations
More information needed
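Until more details are added, a minimal sketch for running the checkpoint as a NER pipeline (the example sentence is arbitrary):

```python
from transformers import pipeline

# A minimal sketch: group sub-token predictions into entity spans
ner = pipeline("token-classification", model="Ahmed87/bert-cased-ner-fcit499", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```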
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 157 | 0.0578 | 0.8782 | 0.8976 | 0.8878 | 0.9825 |
| No log | 2.0 | 314 | 0.0425 | 0.9317 | 0.9343 | 0.9330 | 0.9885 |
| No log | 3.0 | 471 | 0.0391 | 0.9381 | 0.9433 | 0.9407 | 0.9897 |
| 0.1097 | 4.0 | 628 | 0.0397 | 0.9377 | 0.9467 | 0.9422 | 0.9900 |
| 0.1097 | 5.0 | 785 | 0.0404 | 0.9417 | 0.9502 | 0.9460 | 0.9905 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
thisisHJLee/wav2vec2-large-xls-r-1b-korean-convsen1
|
thisisHJLee
| 2022-11-09T05:57:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T01:25:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-1b-korean-convsen1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-korean-convsen1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0014
- Cer: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
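Until more details are added, a minimal transcription sketch with the ASR pipeline (assumes a 16 kHz mono recording; "sample.wav" is a placeholder path):

```python
from transformers import pipeline

# A minimal sketch: transcribe a Korean audio file with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="thisisHJLee/wav2vec2-large-xls-r-1b-korean-convsen1")
print(asr("sample.wav"))
```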
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.3161 | 1.0 | 1762 | 0.1495 | 0.0443 |
| 0.1188 | 2.0 | 3524 | 0.0125 | 0.0033 |
| 0.0399 | 3.0 | 5286 | 0.0014 | 0.0002 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
jamescalam/minilm-arxiv-encoder
|
jamescalam
| 2022-11-09T05:15:38Z | 5 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-09T02:57:58Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bigmorning/whisper3_0020
|
bigmorning
| 2022-11-09T04:45:24Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-09T04:45:15Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper3_0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper3_0020
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1844
- Train Accuracy: 0.0334
- Validation Loss: 0.5619
- Validation Accuracy: 0.0313
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
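Until more details are added, a minimal TensorFlow inference sketch; "sample.wav" is a placeholder for a 16 kHz mono recording, and the processor is loaded from the base `openai/whisper-tiny` checkpoint in case this repository does not ship preprocessing files:

```python
import librosa
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# A minimal sketch: load audio, extract log-mel features, and decode a transcription
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper3_0020")

audio, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```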
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 5.0832 | 0.0116 | 4.4298 | 0.0124 | 0 |
| 4.3130 | 0.0131 | 4.0733 | 0.0141 | 1 |
| 3.9211 | 0.0146 | 3.6762 | 0.0157 | 2 |
| 3.5505 | 0.0159 | 3.3453 | 0.0171 | 3 |
| 3.1592 | 0.0175 | 2.8062 | 0.0199 | 4 |
| 2.2581 | 0.0220 | 1.7622 | 0.0252 | 5 |
| 1.4671 | 0.0259 | 1.2711 | 0.0276 | 6 |
| 1.0779 | 0.0278 | 1.0220 | 0.0288 | 7 |
| 0.8591 | 0.0290 | 0.8836 | 0.0295 | 8 |
| 0.7159 | 0.0297 | 0.7918 | 0.0300 | 9 |
| 0.6105 | 0.0304 | 0.7276 | 0.0303 | 10 |
| 0.5287 | 0.0309 | 0.6850 | 0.0306 | 11 |
| 0.4614 | 0.0313 | 0.6472 | 0.0308 | 12 |
| 0.4049 | 0.0317 | 0.6199 | 0.0310 | 13 |
| 0.3562 | 0.0320 | 0.6019 | 0.0311 | 14 |
| 0.3139 | 0.0324 | 0.5868 | 0.0311 | 15 |
| 0.2766 | 0.0326 | 0.5751 | 0.0312 | 16 |
| 0.2438 | 0.0329 | 0.5701 | 0.0312 | 17 |
| 0.2116 | 0.0332 | 0.5686 | 0.0313 | 18 |
| 0.1844 | 0.0334 | 0.5619 | 0.0313 | 19 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Fuji1995/fuji-test
|
Fuji1995
| 2022-11-09T03:46:28Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-09T03:39:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: fuji-test
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0
---
# fuji-test
Description
## Example Images
#### corgi

|
hoaiht/CLIP-ViT-H-14-laion2B-s32B-b79K
|
hoaiht
| 2022-11-09T03:05:42Z | 26 | 2 |
open_clip
|
[
"open_clip",
"pytorch",
"clip",
"arxiv:1910.04867",
"license:mit",
"region:us"
] | null | 2022-11-08T10:18:55Z |
---
license: mit
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# Model Card for CLIP ViT-H/14 - LAION-2B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
A CLIP ViT-H/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Model training done by Romain Beaumont on the [stability.ai](https://stability.ai/) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Beyond the above notice, the LAION-5B dataset used to train these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
Please see [training notes](https://docs.google.com/document/d/1EFbMLRWSSV0LUf9Du1pWzWqgeiIRPwEWX2s1C6mAk5c) and [wandb logs](https://wandb.ai/rom1504/eval_openclip/reports/H-14--VmlldzoyNDAxODQ3).
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
**TODO** - more detail
## Results
The model achieves a 78.0 zero-shot top-1 accuracy on ImageNet-1k.
An initial round of benchmarks has been performed on a wider range of datasets, currently viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
**TODO** - create table for just this model's metrics.
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.
# Citation
**BibTeX:**
In addition to forthcoming LAION-5B (https://laion.ai/blog/laion-5b/) paper, please cite:
OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
** TODO ** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
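In the meantime, a minimal zero-shot classification sketch with OpenCLIP (assumes the `open_clip_torch` package and a local `cat.jpg`; the `laion2b_s32b_b79k` pretrained tag points at the upstream LAION release of these weights rather than this repository):

```python
import torch
from PIL import Image
import open_clip

# A minimal sketch: embed an image and candidate captions, then compare them
model, _, preprocess = open_clip.create_model_and_transforms("ViT-H-14", pretrained="laion2b_s32b_b79k")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = open_clip.tokenize(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # similarity-based probabilities over the candidate labels
```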
|
Alred/distilbert-base-uncased-finetuned-squad-ver1
|
Alred
| 2022-11-09T02:54:54Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-09T02:30:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad-ver1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-ver1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8669
## Model description
More information needed
## Intended uses & limitations
More information needed
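Until more details are added, a minimal extractive question-answering sketch (the question and context are arbitrary examples):

```python
from transformers import pipeline

# A minimal sketch: extract an answer span from a context paragraph
qa = pipeline("question-answering", model="Alred/distilbert-base-uncased-finetuned-squad-ver1")
result = qa(question="What was the model fine-tuned on?",
            context="This checkpoint is a DistilBERT model fine-tuned on the SQuAD dataset.")
print(result["answer"], result["score"])
```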
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6175 | 1.0 | 554 | 1.8621 |
| 1.1951 | 2.0 | 1108 | 1.8669 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
osanseviero/test_sb3_is_working2
|
osanseviero
| 2022-11-09T02:20:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-09T02:20:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -152.88 +/- 29.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file stored in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub; the filename is a hypothetical placeholder
checkpoint = load_from_hub(repo_id="osanseviero/test_sb3_is_working2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
studio-ousia/luke-japanese-large
|
studio-ousia
| 2022-11-09T02:18:56Z | 66,623 | 9 |
transformers
|
[
"transformers",
"pytorch",
"luke",
"fill-mask",
"named entity recognition",
"entity typing",
"relation classification",
"question answering",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-07T14:25:53Z |
---
language: ja
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- entity typing
- relation classification
- question answering
license: apache-2.0
---
## luke-japanese-large
**luke-japanese** is the Japanese version of **LUKE** (**L**anguage
**U**nderstanding with **K**nowledge-based **E**mbeddings), a pre-trained
_knowledge-enhanced_ contextualized representation of words and entities. LUKE
treats words and entities in a given text as independent tokens, and outputs
contextualized representations of them. Please refer to our
[GitHub repository](https://github.com/studio-ousia/luke) for more details and
updates.
This model contains Wikipedia entity embeddings which are not used in general
NLP tasks. Please use the
[lite version](https://huggingface.co/studio-ousia/luke-japanese-large-lite/)
for tasks that do not use Wikipedia entities as inputs.
**luke-japanese**は、単語とエンティティの知識拡張型訓練済み Transformer モデル**LUKE**の日本語版です。LUKE は単語とエンティティを独立したトークンとして扱い、これらの文脈を考慮した表現を出力します。詳細については、[GitHub リポジトリ](https://github.com/studio-ousia/luke)を参照してください。
このモデルは、通常の NLP タスクでは使われない Wikipedia エンティティのエンベディングを含んでいます。単語の入力のみを使うタスクには、[lite version](https://huggingface.co/studio-ousia/luke-japanese-large-lite/)を使用してください。
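A minimal usage sketch, assuming the tokenizer accepts the LUKE-style `entity_spans` argument (as in the LUKE/mLUKE examples); the character span below marks 東京 in the example sentence:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-japanese-large")
model = AutoModel.from_pretrained("studio-ousia/luke-japanese-large")

text = "東京は日本の首都です。"  # "Tokyo is the capital of Japan."
inputs = tokenizer(text, entity_spans=[(0, 2)], return_tensors="pt")  # span covering 東京
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)         # contextualized word representations
print(outputs.entity_last_hidden_state.shape)  # contextualized entity representations
```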
### Experimental results on JGLUE
The experimental results evaluated on the dev set of
[JGLUE](https://github.com/yahoojapan/JGLUE) are shown as follows:
| Model | MARC-ja | JSTS | JNLI | JCommonsenseQA |
| ----------------------------- | --------- | ------------------- | --------- | -------------- |
| | acc | Pearson/Spearman | acc | acc |
| **LUKE Japanese large** | **0.965** | **0.932**/**0.902** | **0.927** | 0.893 |
| _Baselines:_ | | | | |
| Tohoku BERT large | 0.955 | 0.913/0.872 | 0.900 | 0.816 |
| Waseda RoBERTa large (seq128) | 0.954 | 0.930/0.896 | 0.924 | **0.907** |
| Waseda RoBERTa large (seq512) | 0.961 | 0.926/0.892 | 0.926 | 0.891 |
| XLM RoBERTa large | 0.964 | 0.918/0.884 | 0.919 | 0.840 |
The baseline scores are obtained from
[here](https://github.com/yahoojapan/JGLUE/blob/a6832af23895d6faec8ecf39ec925f1a91601d62/README.md).
### Citation
```latex
@inproceedings{yamada2020luke,
title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
booktitle={EMNLP},
year={2020}
}
```
|
Tod14/Blacky_Gray
|
Tod14
| 2022-11-09T02:02:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-09T02:02:04Z |
---
license: creativeml-openrail-m
---
|
Signorlimone/Laikafy
|
Signorlimone
| 2022-11-09T01:51:30Z | 0 | 5 | null |
[
"region:us"
] | null | 2022-11-06T20:36:29Z |
Use the token `laikafy` for the model to kick in. I suggest using `[laikafy:10]`, otherwise it often generates the same model; putting the token between brackets followed by `:10` makes it start using the token only after 10 samples. For that reason I usually run 50 samples.
|
lusscios/min
|
lusscios
| 2022-11-09T01:26:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-09T01:26:45Z |
---
license: creativeml-openrail-m
---
|
Devarshi/Brain_Tumor_Class_swin
|
Devarshi
| 2022-11-09T00:32:34Z | 204 | 1 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-08T10:47:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: Brain_Tumor_Class_swin
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9936204146730463
- name: F1
type: f1
value: 0.9936204146730463
- name: Recall
type: recall
value: 0.9936204146730463
- name: Precision
type: precision
value: 0.9936204146730463
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Brain_Tumor_Class_swin
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0220
- Accuracy: 0.9936
- F1: 0.9936
- Recall: 0.9936
- Precision: 0.9936
## Model description
More information needed
## Intended uses & limitations
More information needed
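Until more details are added, a minimal inference sketch with the image-classification pipeline ("scan.png" is a placeholder for a local image):

```python
from transformers import pipeline

# A minimal sketch: classify an image with the fine-tuned Swin checkpoint
classifier = pipeline("image-classification", model="Devarshi/Brain_Tumor_Class_swin")
print(classifier("scan.png"))
```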
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1248 | 1.0 | 220 | 0.0610 | 0.9767 | 0.9767 | 0.9767 | 0.9767 |
| 0.0887 | 2.0 | 440 | 0.0300 | 0.9920 | 0.9920 | 0.9920 | 0.9920 |
| 0.0449 | 3.0 | 660 | 0.0220 | 0.9936 | 0.9936 | 0.9936 | 0.9936 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
bieu/WeeBoo-Diffusion
|
bieu
| 2022-11-08T23:15:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-07T21:36:18Z |
---
license: creativeml-openrail-m
---
**_WeeBoo Diffusion_** is a model made for **creating characters and backgrounds**.
**Model 1** lets you work in **anime, cartoon, manga, and novel** styles.
Model 2 will let you create, **_in addition to the characters, more varied things such as backgrounds and more complex art styles, so give it a try_**.
|
kalpeshk2011/rankgen-t5-xl-pg19
|
kalpeshk2011
| 2022-11-08T22:45:49Z | 160 | 3 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"feature-extraction",
"contrastive learning",
"ranking",
"decoding",
"metric learning",
"text generation",
"retrieval",
"custom_code",
"en",
"dataset:Wikipedia",
"dataset:PG19",
"dataset:C4",
"dataset:relic",
"dataset:ChapterBreak",
"dataset:HellaSwag",
"dataset:ROCStories",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-07-20T04:40:35Z |
---
language:
- en
thumbnail: "https://pbs.twimg.com/media/FThx_rEWAAEoujW?format=jpg&name=medium"
tags:
- t5
- contrastive learning
- ranking
- decoding
- metric learning
- pytorch
- text generation
- retrieval
license: "apache-2.0"
datasets:
- Wikipedia
- PG19
- C4
- relic
- ChapterBreak
- HellaSwag
- ROCStories
metrics:
- MAUVE
- human
---
## Main repository
https://github.com/martiansideofthemoon/rankgen
## What is RankGen?
RankGen is a suite of encoder models (100M-1.2B parameters) which map prefixes and generations from any pretrained English language model to a shared vector space. RankGen can be used to rerank multiple full-length samples from an LM, and it can also be incorporated as a scoring function into beam search to significantly improve generation quality (0.85 vs 0.77 MAUVE, 75% preference according to human annotators who are English writers). RankGen can also be used like a dense retriever, and achieves state-of-the-art performance on [literary retrieval](https://relic.cs.umass.edu/leaderboard.html).
## Setup
**Requirements** (`pip` will install these dependencies for you)
Python 3.7+, `torch` (CUDA recommended), `transformers`
**Installation**
```
python3.7 -m virtualenv rankgen-venv
source rankgen-venv/bin/activate
pip install rankgen
```
Get the data [here](https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4?usp=sharing) and place the folder in the root directory. Alternatively, use `gdown` as shown below:
```
gdown --folder https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4
```
Run the test script to make sure the RankGen checkpoint has loaded correctly,
```
python -m rankgen.test_rankgen_encoder --model_path kalpeshk2011/rankgen-t5-base-all
### Expected output
0.0009239262409127233
0.0011521980725477804
```
## Using RankGen
Loading RankGen is simple using the HuggingFace APIs (see Method-2 below), but we suggest using [`RankGenEncoder`](https://github.com/martiansideofthemoon/rankgen/blob/master/rankgen/rankgen_encoder.py), which is a small wrapper around the HuggingFace APIs for correctly preprocessing data and doing tokenization automatically. You can either download [our repository](https://github.com/martiansideofthemoon/rankgen) and install the API, or copy the implementation from [below](#rankgenencoder-implementation).
#### [SUGGESTED] Method-1: Loading the model with RankGenEncoder
```
from rankgen import RankGenEncoder, RankGenGenerator
rankgen_encoder = RankGenEncoder("kalpeshk2011/rankgen-t5-xl-pg19")
# Encoding vectors
prefix_vectors = rankgen_encoder.encode(["This is a prefix sentence."], vectors_type="prefix")
suffix_vectors = rankgen_encoder.encode(["This is a suffix sentence."], vectors_type="suffix")
# Generating text
# use a HuggingFace compatible language model
generator = RankGenGenerator(rankgen_encoder=rankgen_encoder, language_model="gpt2-medium")
inputs = ["Whatever might be the nature of the tragedy it would be over with long before this, and those moving black spots away yonder to the west, that he had discerned from the bluff, were undoubtedly the departing raiders. There was nothing left for Keith to do except determine the fate of the unfortunates, and give their bodies decent burial. That any had escaped, or yet lived, was altogether unlikely, unless, perchance, women had been in the party, in which case they would have been borne away prisoners."]
# Baseline nucleus sampling
print(generator.generate_single(inputs, top_p=0.9)[0][0])
# Over-generate and re-rank
print(generator.overgenerate_rerank(inputs, top_p=0.9, num_samples=10)[0][0])
# Beam search
print(generator.beam_search(inputs, top_p=0.9, num_samples=10, beam_size=2)[0][0])
```
#### Method-2: Loading the model with HuggingFace APIs
```
from transformers import T5Tokenizer, AutoModel
tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-xl")
model = AutoModel.from_pretrained("kalpeshk2011/rankgen-t5-xl-pg19", trust_remote_code=True)
```
### RankGenEncoder Implementation
```
import tqdm
import torch
from transformers import T5Tokenizer, T5EncoderModel, AutoModel
class RankGenEncoder():
def __init__(self, model_path, max_batch_size=32, model_size=None, cache_dir=None):
assert model_path in ["kalpeshk2011/rankgen-t5-xl-all", "kalpeshk2011/rankgen-t5-xl-pg19", "kalpeshk2011/rankgen-t5-base-all", "kalpeshk2011/rankgen-t5-large-all"]
self.max_batch_size = max_batch_size
self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
if model_size is None:
if "t5-large" in model_path or "t5_large" in model_path:
self.model_size = "large"
elif "t5-xl" in model_path or "t5_xl" in model_path:
self.model_size = "xl"
else:
self.model_size = "base"
else:
self.model_size = model_size
self.tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-{self.model_size}", cache_dir=cache_dir)
self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
self.model.to(self.device)
self.model.eval()
def encode(self, inputs, vectors_type="prefix", verbose=False, return_input_ids=False):
tokenizer = self.tokenizer
max_batch_size = self.max_batch_size
if isinstance(inputs, str):
inputs = [inputs]
if vectors_type == 'prefix':
inputs = ['pre ' + input for input in inputs]
max_length = 512
else:
inputs = ['suffi ' + input for input in inputs]
max_length = 128
all_embeddings = []
all_input_ids = []
for i in tqdm.tqdm(range(0, len(inputs), max_batch_size), total=(len(inputs) // max_batch_size) + 1, disable=not verbose, desc=f"Encoding {vectors_type} inputs:"):
tokenized_inputs = tokenizer(inputs[i:i + max_batch_size], return_tensors="pt", padding=True)
for k, v in tokenized_inputs.items():
tokenized_inputs[k] = v[:, :max_length]
tokenized_inputs = tokenized_inputs.to(self.device)
with torch.inference_mode():
batch_embeddings = self.model(**tokenized_inputs)
all_embeddings.append(batch_embeddings)
if return_input_ids:
all_input_ids.extend(tokenized_inputs.input_ids.cpu().tolist())
return {
"embeddings": torch.cat(all_embeddings, dim=0),
"input_ids": all_input_ids
}
```
|
kalpeshk2011/rankgen-t5-base-all
|
kalpeshk2011
| 2022-11-08T22:45:41Z | 160 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"feature-extraction",
"contrastive learning",
"ranking",
"decoding",
"metric learning",
"text generation",
"retrieval",
"custom_code",
"en",
"dataset:Wikipedia",
"dataset:PG19",
"dataset:C4",
"dataset:relic",
"dataset:ChapterBreak",
"dataset:HellaSwag",
"dataset:ROCStories",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-07-20T01:35:22Z |
---
language:
- en
thumbnail: "https://pbs.twimg.com/media/FThx_rEWAAEoujW?format=jpg&name=medium"
tags:
- t5
- contrastive learning
- ranking
- decoding
- metric learning
- pytorch
- text generation
- retrieval
license: "apache-2.0"
datasets:
- Wikipedia
- PG19
- C4
- relic
- ChapterBreak
- HellaSwag
- ROCStories
metrics:
- MAUVE
- human
---
## Main repository
https://github.com/martiansideofthemoon/rankgen
## What is RankGen?
RankGen is a suite of encoder models (100M-1.2B parameters) which map prefixes and generations from any pretrained English language model to a shared vector space. RankGen can be used to rerank multiple full-length samples from an LM, and it can also be incorporated as a scoring function into beam search to significantly improve generation quality (0.85 vs 0.77 MAUVE, 75% preference according to human annotators who are English writers). RankGen can also be used like a dense retriever, and achieves state-of-the-art performance on [literary retrieval](https://relic.cs.umass.edu/leaderboard.html).
## Setup
**Requirements** (`pip` will install these dependencies for you)
Python 3.7+, `torch` (CUDA recommended), `transformers`
**Installation**
```
python3.7 -m virtualenv rankgen-venv
source rankgen-venv/bin/activate
pip install rankgen
```
Get the data [here](https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4?usp=sharing) and place folder in root directory. Alternatively, use `gdown` as shown below,
```
gdown --folder https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4
```
Run the test script to make sure the RankGen checkpoint has loaded correctly,
```
python -m rankgen.test_rankgen_encoder --model_path kalpeshk2011/rankgen-t5-base-all
### Expected output
0.0009239262409127233
0.0011521980725477804
```
## Using RankGen
Loading RankGen is simple using the HuggingFace APIs (see Method-2 below), but we suggest using [`RankGenEncoder`](https://github.com/martiansideofthemoon/rankgen/blob/master/rankgen/rankgen_encoder.py), which is a small wrapper around the HuggingFace APIs for correctly preprocessing data and doing tokenization automatically. You can either download [our repository](https://github.com/martiansideofthemoon/rankgen) and install the API, or copy the implementation from [below](#rankgenencoder-implementation).
#### [SUGGESTED] Method-1: Loading the model with RankGenEncoder
```
from rankgen import RankGenEncoder, RankGenGenerator
rankgen_encoder = RankGenEncoder("kalpeshk2011/rankgen-t5-base-all")
# Encoding vectors
prefix_vectors = rankgen_encoder.encode(["This is a prefix sentence."], vectors_type="prefix")
suffix_vectors = rankgen_encoder.encode(["This is a suffix sentence."], vectors_type="suffix")
# Generating text
# use a HuggingFace compatible language model
generator = RankGenGenerator(rankgen_encoder=rankgen_encoder, language_model="gpt2-medium")
inputs = ["Whatever might be the nature of the tragedy it would be over with long before this, and those moving black spots away yonder to the west, that he had discerned from the bluff, were undoubtedly the departing raiders. There was nothing left for Keith to do except determine the fate of the unfortunates, and give their bodies decent burial. That any had escaped, or yet lived, was altogether unlikely, unless, perchance, women had been in the party, in which case they would have been borne away prisoners."]
# Baseline nucleus sampling
print(generator.generate_single(inputs, top_p=0.9)[0][0])
# Over-generate and re-rank
print(generator.overgenerate_rerank(inputs, top_p=0.9, num_samples=10)[0][0])
# Beam search
print(generator.beam_search(inputs, top_p=0.9, num_samples=10, beam_size=2)[0][0])
```
#### Method-2: Loading the model with HuggingFace APIs
```
from transformers import T5Tokenizer, AutoModel
tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-base")
model = AutoModel.from_pretrained("kalpeshk2011/rankgen-t5-base-all", trust_remote_code=True)
```
### RankGenEncoder Implementation
```
import torch
import tqdm
from transformers import T5Tokenizer, T5EncoderModel, AutoModel
class RankGenEncoder():
def __init__(self, model_path, max_batch_size=32, model_size=None, cache_dir=None):
assert model_path in ["kalpeshk2011/rankgen-t5-xl-all", "kalpeshk2011/rankgen-t5-xl-pg19", "kalpeshk2011/rankgen-t5-base-all", "kalpeshk2011/rankgen-t5-large-all"]
self.max_batch_size = max_batch_size
self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
if model_size is None:
if "t5-large" in model_path or "t5_large" in model_path:
self.model_size = "large"
elif "t5-xl" in model_path or "t5_xl" in model_path:
self.model_size = "xl"
else:
self.model_size = "base"
else:
self.model_size = model_size
self.tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-{self.model_size}", cache_dir=cache_dir)
self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
self.model.to(self.device)
self.model.eval()
def encode(self, inputs, vectors_type="prefix", verbose=False, return_input_ids=False):
tokenizer = self.tokenizer
max_batch_size = self.max_batch_size
if isinstance(inputs, str):
inputs = [inputs]
if vectors_type == 'prefix':
inputs = ['pre ' + input for input in inputs]
max_length = 512
else:
inputs = ['suffi ' + input for input in inputs]
max_length = 128
all_embeddings = []
all_input_ids = []
for i in tqdm.tqdm(range(0, len(inputs), max_batch_size), total=(len(inputs) // max_batch_size) + 1, disable=not verbose, desc=f"Encoding {vectors_type} inputs:"):
tokenized_inputs = tokenizer(inputs[i:i + max_batch_size], return_tensors="pt", padding=True)
for k, v in tokenized_inputs.items():
tokenized_inputs[k] = v[:, :max_length]
tokenized_inputs = tokenized_inputs.to(self.device)
with torch.inference_mode():
batch_embeddings = self.model(**tokenized_inputs)
all_embeddings.append(batch_embeddings)
if return_input_ids:
all_input_ids.extend(tokenized_inputs.input_ids.cpu().tolist())
return {
"embeddings": torch.cat(all_embeddings, dim=0),
"input_ids": all_input_ids
}
```
|
AlekseyKorshuk/dalio-6.7b-test
|
AlekseyKorshuk
| 2022-11-08T22:11:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-08T21:02:49Z |
---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dalio-6.7b-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-6.7b-test
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6641
- Accuracy: 0.0662
## Model description
More information needed
## Intended uses & limitations
More information needed
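A minimal inference sketch, assuming the checkpoint loads with the standard `text-generation` pipeline (the prompt below is illustrative, and the underlying OPT-6.7B weights are large):
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes compatibility with the standard
# text-generation pipeline; the prompt is only an example.
generator = pipeline("text-generation", model="AlekseyKorshuk/dalio-6.7b-test")
print(generator("The most important principle is", max_new_tokens=40)[0]["generated_text"])
```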
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5958 | 0.31 | 16 | 2.5371 | 0.0659 |
| 2.3784 | 0.62 | 32 | 2.5039 | 0.0670 |
| 2.3578 | 0.92 | 48 | 2.6074 | 0.0654 |
| 1.3819 | 1.23 | 64 | 2.6680 | 0.0658 |
| 1.1529 | 1.54 | 80 | 2.6738 | 0.0665 |
| 1.2938 | 1.85 | 96 | 2.6641 | 0.0662 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
victorbahlangene/roberta-base-fine-Disaster-Tweets-Part3
|
victorbahlangene
| 2022-11-08T21:52:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-08T21:41:40Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-fine-Disaster-Tweets-Part3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-fine-Disaster-Tweets-Part3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3882
- Accuracy: 0.8380
- F1: 0.8377
## Model description
More information needed
## Intended uses & limitations
More information needed
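A minimal inference sketch, assuming the standard `text-classification` pipeline; the label names (e.g. `LABEL_0`/`LABEL_1`) are not documented in this card:
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the standard text-classification pipeline;
# the example tweet is illustrative.
classifier = pipeline("text-classification", model="victorbahlangene/roberta-base-fine-Disaster-Tweets-Part3")
print(classifier("A wildfire is spreading near our town, please stay safe everyone."))
```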
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 203 | 0.4632 | 0.8179 | 0.8184 |
| No log | 2.0 | 406 | 0.3882 | 0.8380 | 0.8377 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
victorbahlangene/xlnet-base-cased-fine-Disaster-Tweets-Part3
|
victorbahlangene
| 2022-11-08T21:38:10Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-08T21:26:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlnet-base-cased-fine-Disaster-Tweets-Part3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-fine-Disaster-Tweets-Part3
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3924
- Accuracy: 0.8468
- F1: 0.8467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 203 | 0.4457 | 0.8257 | 0.8253 |
| No log | 2.0 | 406 | 0.3924 | 0.8468 | 0.8467 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
santiagoahl/vit_model_santiago_ahumada
|
santiagoahl
| 2022-11-08T20:28:28Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-08T18:52:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit_model_santiago_ahumada
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model_santiago_ahumada
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0164
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
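A minimal inference sketch, assuming the standard `image-classification` pipeline; the image path is a placeholder:
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the standard image-classification pipeline;
# replace the path with a real bean-leaf image.
classifier = pipeline("image-classification", model="santiagoahl/vit_model_santiago_ahumada")
print(classifier("path/to/bean_leaf.jpg"))
```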
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.143 | 3.85 | 500 | 0.0164 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
huggingtweets/big___oven-codeinecucumber
|
huggingtweets
| 2022-11-08T19:32:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-25T19:41:48Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1579203041764442116/RSLookYD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gutted & oskcar</div>
<div style="text-align: center; font-size: 14px;">@big___oven-codeinecucumber</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gutted & oskcar.
| Data | Gutted | oskcar |
| --- | --- | --- |
| Tweets downloaded | 1761 | 2669 |
| Retweets | 243 | 635 |
| Short tweets | 326 | 308 |
| Tweets kept | 1192 | 1726 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qyf2pl5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-codeinecucumber's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rr9twhn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rr9twhn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/big___oven-codeinecucumber')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
MarcosDib/ModeloTesteDib
|
MarcosDib
| 2022-11-08T18:45:01Z | 0 | 0 | null |
[
"exbert",
"en",
"license:mit",
"region:us"
] | null | 2022-11-08T18:42:08Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ArafatBHossain/bert_uncased_fine_tuned_emotion_dataset
|
ArafatBHossain
| 2022-11-08T18:17:50Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-14T05:38:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_uncased_fine_tuned_emotion_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_fine_tuned_emotion_dataset
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1870
- Accuracy: 0.943
## Model description
More information needed
## Intended uses & limitations
More information needed
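A minimal inference sketch, assuming the standard `text-classification` pipeline; the emotion label names are not listed in this card:
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the standard text-classification pipeline;
# the input sentence is illustrative.
classifier = pipeline("text-classification", model="ArafatBHossain/bert_uncased_fine_tuned_emotion_dataset")
print(classifier("I can't believe how wonderful this day turned out!"))
```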
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2321 | 1.0 | 2000 | 0.2690 | 0.924 |
| 0.1483 | 2.0 | 4000 | 0.1683 | 0.9415 |
| 0.0954 | 3.0 | 6000 | 0.1870 | 0.943 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aorhan/ddpm-butterflies-128
|
aorhan
| 2022-11-08T17:09:51Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-08T16:38:49Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumption): standard DDPMPipeline usage for this checkpoint
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("aorhan/ddpm-butterflies-128")
image = pipeline().images[0]  # generates one 128x128 sample
image.save("ddpm_butterfly_sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/aorhan/ddpm-butterflies-128/tensorboard?#scalars)
|
PaulaAlfy/xlm-roberta-base-finetuned-panx-all
|
PaulaAlfy
| 2022-11-08T16:56:34Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-08T16:22:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1528
- F1: 0.8734
## Model description
More information needed
## Intended uses & limitations
More information needed
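A minimal inference sketch, assuming the standard `token-classification` pipeline; the entity labels presumably follow the PAN-X scheme (PER, ORG, LOC):
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the standard token-classification pipeline;
# the example sentence is illustrative.
ner = pipeline("token-classification",
               model="PaulaAlfy/xlm-roberta-base-finetuned-panx-all",
               aggregation_strategy="simple")
print(ner("Angela Merkel besuchte die Hauptstadt von Frankreich."))
```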
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2634 | 1.0 | 525 | 0.1602 | 0.8258 |
| 0.1316 | 2.0 | 1050 | 0.1454 | 0.8471 |
| 0.089 | 3.0 | 1575 | 0.1430 | 0.8555 |
| 0.0596 | 4.0 | 2100 | 0.1430 | 0.8676 |
| 0.0393 | 5.0 | 2625 | 0.1528 | 0.8734 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
harmonai/honk-140k
|
harmonai
| 2022-11-08T16:42:42Z | 9 | 1 |
diffusers
|
[
"diffusers",
"audio-generation",
"license:mit",
"diffusers:DanceDiffusionPipeline",
"region:us"
] | null | 2022-10-20T12:20:05Z |
---
license: mit
tags:
- audio-generation
---
[Dance Diffusion](https://github.com/Harmonai-org/sample-generator) is now available in 🧨 Diffusers.
## FP32
```python
# !pip install diffusers[torch] accelerate scipy
from diffusers import DiffusionPipeline
from scipy.io.wavfile import write
model_id = "harmonai/honk-140k"
pipe = DiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")
audios = pipe(audio_length_in_s=4.0).audios
# To save locally
for i, audio in enumerate(audios):
write(f"test_{i}.wav", pipe.unet.sample_rate, audio.transpose())
# To display in Google Colab
import IPython.display as ipd
for audio in audios:
display(ipd.Audio(audio, rate=pipe.unet.sample_rate))
```
## FP16
Faster at a small loss of quality
```python
# !pip install diffusers[torch] accelerate scipy
from diffusers import DiffusionPipeline
from scipy.io.wavfile import write
import torch
model_id = "harmonai/honk-140k"
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
audios = pipe(audio_length_in_s=4.0).audios
# To save locally
for i, audio in enumerate(audios):
write(f"{i}.wav", pipe.unet.sample_rate, audio.transpose())
# To dislay in google colab
import IPython.display as ipd
for audio in audios:
display(ipd.Audio(audio, rate=pipe.unet.sample_rate))
```
|
espnet/iam_handwriting_ocr
|
espnet
| 2022-11-08T16:28:56Z | 4 | 7 |
espnet
|
[
"espnet",
"image-to-text",
"ocr",
"handwriting-recognition",
"en",
"dataset:iam",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
image-to-text
| 2022-11-04T17:05:39Z |
---
tags:
- espnet
- image-to-text
- ocr
- handwriting-recognition
language: en
datasets:
- iam
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/iam_handwriting_ocr`
This model was trained by kenzheng99 using iam recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 2169367022b8939d22005e8cf45a65bb20bc0768
pip install -e .
cd egs2/iam/ocr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/iam_handwriting_ocr
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Nov 7 13:40:17 EST 2022`
- python version: `3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0]`
- espnet version: `espnet 202209`
- pytorch version: `pytorch 1.10.0`
- Git hash: `2169367022b8939d22005e8cf45a65bb20bc0768`
- Commit date: `Thu Nov 3 20:38:03 2022 -0400`
## asr_train_asr_conformer_extracted_en_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/test|2915|25932|80.5|17.3|2.2|0.8|20.3|72.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/test|2915|125616|94.0|4.2|1.8|0.7|6.7|72.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_extracted_en_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 35197
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_extracted_en_char/train/speech_shape
- exp/asr_stats_extracted_en_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_extracted_en_char/valid/speech_shape
- exp/asr_stats_extracted_en_char/valid/text_shape.char
batch_type: folded
valid_batch_type: null
fold_length:
- 800
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/extracted/train/feats.scp
- speech
- kaldi_ark
- - dump/extracted/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/extracted/valid/feats.scp
- speech
- kaldi_ark
- - dump/extracted/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- <space>
- e
- t
- a
- o
- n
- i
- r
- s
- h
- l
- d
- c
- u
- m
- f
- p
- g
- y
- w
- b
- .
- ','
- v
- k
- '-'
- T
- ''''
- M
- I
- A
- '"'
- S
- P
- H
- B
- C
- W
- N
- G
- x
- R
- E
- L
- F
- '0'
- D
- '1'
- j
- O
- q
- U
- K
- '!'
- '3'
- '9'
- (
- z
- )
- ':'
- V
- ;
- '5'
- '2'
- J
- '8'
- Y
- '4'
- '6'
- '?'
- '#'
- '&'
- '7'
- /
- '*'
- Q
- X
- Z
- +
- <sos/eos>
init: xavier_uniform
input_size: 100
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_extracted_en_char/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: '202209'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
PaulaAlfy/xlm-roberta-base-finetuned-panx-de-fr
|
PaulaAlfy
| 2022-11-08T15:55:07Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-08T15:16:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1907
- F1: 0.8682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2901 | 1.0 | 715 | 0.1864 | 0.8211 |
| 0.1576 | 2.0 | 1430 | 0.1667 | 0.8441 |
| 0.1038 | 3.0 | 2145 | 0.1710 | 0.8452 |
| 0.0701 | 4.0 | 2860 | 0.1787 | 0.8636 |
| 0.0449 | 5.0 | 3575 | 0.1907 | 0.8682 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
bigmorning/whisper_0015
|
bigmorning
| 2022-11-08T14:43:13Z | 32 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-08T14:42:41Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3281
- Train Accuracy: 0.0322
- Validation Loss: 0.5841
- Validation Accuracy: 0.0311
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
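A minimal inference sketch, assuming the repository holds TF weights compatible with `TFWhisperForConditionalGeneration` and that the base `openai/whisper-tiny` processor applies; the audio below is a silent placeholder:
```python
import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# Hypothetical inference sketch: processor taken from the base checkpoint,
# model weights from this fine-tuned repository.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_0015")

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```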
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 5.0856 | 0.0116 | 4.4440 | 0.0123 | 0 |
| 4.3149 | 0.0131 | 4.0521 | 0.0142 | 1 |
| 3.9260 | 0.0146 | 3.7264 | 0.0153 | 2 |
| 3.5418 | 0.0160 | 3.3026 | 0.0174 | 3 |
| 2.7510 | 0.0198 | 2.0157 | 0.0241 | 4 |
| 1.6782 | 0.0250 | 1.3567 | 0.0273 | 5 |
| 1.1705 | 0.0274 | 1.0678 | 0.0286 | 6 |
| 0.9126 | 0.0287 | 0.9152 | 0.0294 | 7 |
| 0.7514 | 0.0296 | 0.8057 | 0.0299 | 8 |
| 0.6371 | 0.0302 | 0.7409 | 0.0302 | 9 |
| 0.5498 | 0.0307 | 0.6854 | 0.0306 | 10 |
| 0.4804 | 0.0312 | 0.6518 | 0.0307 | 11 |
| 0.4214 | 0.0316 | 0.6200 | 0.0310 | 12 |
| 0.3713 | 0.0319 | 0.5947 | 0.0311 | 13 |
| 0.3281 | 0.0322 | 0.5841 | 0.0311 | 14 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|
rosamondthalken/t5-base-sci-names
|
rosamondthalken
| 2022-11-08T14:39:36Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"scientific names",
"text generation",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-16T15:00:05Z |
---
language:
- en
tags:
- scientific names
- text generation
license: cc-by-sa-4.0
---
# t5-base-sci-names
Biodiversity literature is dedicated to the identification, documentation, and categorization of plants, fungi, animals, and other living organisms. Correctly extracting the name of an organism within these documents involves finding the entire scientific name–including the genus, specific epithet, and author name. Extracting these names allows biologists to access documents about a species more comprehensively, and to track an organism’s history of documentation, which includes biological changes and changes in how scientists describe them.
**t5-base-sci-names** uses advances in text-to-text generation to generate scientific names and authors from biodiversity literature. This model was trained on hand-labeled biodiversity texts, including labeled information about a mentioned organism's genus (abbreviated and expanded), specific epithet, and author. This model was trained to output 0-N scientific names with specific prefixes (e.g. "genus = " or "epithet = ") and performs best with anywhere from 20-120 words.
You can also use the model in this tutorial for [scientific names generation](https://colab.research.google.com/drive/1GEpnCaMJYiPIhuZiDJ1X1pZsGtGSm8Ds?usp=sharing).
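A minimal generation sketch, assuming standard seq2seq usage and that the repository ships its own tokenizer; the passage is illustrative and the exact prompt format is covered in the linked tutorial:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Hypothetical sketch: standard T5 generation over a short biodiversity passage.
tokenizer = AutoTokenizer.from_pretrained("rosamondthalken/t5-base-sci-names")
model = T5ForConditionalGeneration.from_pretrained("rosamondthalken/t5-base-sci-names")

passage = "Quercus alba L., commonly known as white oak, is widespread in eastern North America."
inputs = tokenizer(passage, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```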
Thanks to Damon Little and Nelson Salinas at the New York Botanical Gardens for their support.
*Note that this model is still a work in progress. Any feedback is welcome.*
|
bigmorning/whisper_0010
|
bigmorning
| 2022-11-08T14:20:50Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-08T14:19:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6371
- Train Accuracy: 0.0302
- Validation Loss: 0.7409
- Validation Accuracy: 0.0302
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 5.0856 | 0.0116 | 4.4440 | 0.0123 | 0 |
| 4.3149 | 0.0131 | 4.0521 | 0.0142 | 1 |
| 3.9260 | 0.0146 | 3.7264 | 0.0153 | 2 |
| 3.5418 | 0.0160 | 3.3026 | 0.0174 | 3 |
| 2.7510 | 0.0198 | 2.0157 | 0.0241 | 4 |
| 1.6782 | 0.0250 | 1.3567 | 0.0273 | 5 |
| 1.1705 | 0.0274 | 1.0678 | 0.0286 | 6 |
| 0.9126 | 0.0287 | 0.9152 | 0.0294 | 7 |
| 0.7514 | 0.0296 | 0.8057 | 0.0299 | 8 |
| 0.6371 | 0.0302 | 0.7409 | 0.0302 | 9 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|