modelId (string, len 5-139) | author (string, len 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 12:31:00) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, len 1-4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 12:28:53) | card (string, len 11-1.01M) |
---|---|---|---|---|---|---|---|---|---|
redstonehero/epicphotogasm_v1
|
redstonehero
| 2023-08-23T20:47:36Z | 32 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T19:35:58Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
ushnahabbasi99/distilhubert-finetuned-gtzan
|
ushnahabbasi99
| 2023-08-23T20:46:01Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-18T22:26:54Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.79
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6405
- Accuracy: 0.79
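For a quick check, here is a minimal inference sketch using the `transformers` audio-classification pipeline (the audio path below is a placeholder, not part of this card):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and classify a music clip into a GTZAN genre.
classifier = pipeline("audio-classification", model="ushnahabbasi99/distilhubert-finetuned-gtzan")
predictions = classifier("path/to/clip.wav")  # placeholder path to a local audio file
print(predictions[:3])  # top predicted genres with scores
```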
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
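For reference, a sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments` (output directory and data/model wiring are omitted; this is not the exact script used for this run):
```python
from transformers import TrainingArguments

# Mirror the hyperparameters above; the Adam betas/epsilon quoted are the library defaults.
training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",
    learning_rate=5e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```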
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9208 | 1.0 | 150 | 1.7528 | 0.52 |
| 1.0745 | 2.0 | 300 | 1.2385 | 0.6 |
| 0.8249 | 3.0 | 450 | 0.8622 | 0.79 |
| 0.6652 | 4.0 | 600 | 0.9211 | 0.72 |
| 0.4782 | 5.0 | 750 | 0.6200 | 0.8 |
| 0.2865 | 6.0 | 900 | 0.6526 | 0.76 |
| 0.1781 | 7.0 | 1050 | 0.5741 | 0.82 |
| 0.1675 | 8.0 | 1200 | 0.5487 | 0.82 |
| 0.0497 | 9.0 | 1350 | 0.6100 | 0.8 |
| 0.0813 | 10.0 | 1500 | 0.6405 | 0.79 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ardt-multipart/ardt-multipart-combo_train_walker2d_v2-2308_1950-33
|
ardt-multipart
| 2023-08-23T20:38:08Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T18:51:36Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-combo_train_walker2d_v2-2308_1950-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-combo_train_walker2d_v2-2308_1950-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JJinBBangMan/bert-finetuned-ner
|
JJinBBangMan
| 2023-08-23T20:36:27Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-23T19:36:40Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9341386728446136
- name: Recall
type: recall
value: 0.9500168293503871
- name: F1
type: f1
value: 0.9420108468919483
- name: Accuracy
type: accuracy
value: 0.9867398598928593
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9341
- Recall: 0.9500
- F1: 0.9420
- Accuracy: 0.9867
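A minimal usage sketch with the token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-token predictions into whole entities.
ner = pipeline("token-classification", model="JJinBBangMan/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```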
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0801 | 1.0 | 1756 | 0.0727 | 0.9047 | 0.9325 | 0.9184 | 0.9814 |
| 0.0403 | 2.0 | 3512 | 0.0574 | 0.9293 | 0.9483 | 0.9387 | 0.9860 |
| 0.0245 | 3.0 | 5268 | 0.0606 | 0.9341 | 0.9500 | 0.9420 | 0.9867 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
wjbmattingly/dnaBERT-k07-w10
|
wjbmattingly
| 2023-08-23T20:35:09Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-15T08:21:31Z |
---
license: mit
widget:
- TTGGGAT TGGGATG GGGATGA GGATGAT GATGATA ATGATAT TGATATT GATATTG ATATTGA <mask>
- ATTGATG TTGATGT TGATGTT GATGTTG ATGTTGG TGTTGGA GTTGGAG TTGGAGT TGGAGTT <mask>
- GAGTTGT AGTTGTG GTTGTGT TTGTGTG TGTGTGT GTGTGTA TGTGTAG GTGTAGA TGTAGAT <mask>
- TAGATAA AGATAAT GATAATT ATAATTA TAATTAG AATTAGG ATTAGGA TTAGGAT TAGGATT <mask>
---
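A minimal fill-mask sketch built from the first widget example above (the input format simply follows the widget strings):
```python
from transformers import pipeline

# Predict the masked k-mer that continues the sequence.
fill = pipeline("fill-mask", model="wjbmattingly/dnaBERT-k07-w10")
print(fill("TTGGGAT TGGGATG GGGATGA GGATGAT GATGATA ATGATAT TGATATT GATATTG ATATTGA <mask>")[:3])
```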
|
arpan-das-astrophysics/a2c-PandaReachDense-v2
|
arpan-das-astrophysics
| 2023-08-23T20:10:30Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T20:44:27Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.51 +/- 0.48
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, so check the repository files):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub("arpan-das-astrophysics/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")  # filename assumed
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
Zmu/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
Zmu
| 2023-08-23T20:03:06Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-23T18:38:15Z |
---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2797
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7205 | 1.0 | 56 | 0.7984 | 0.77 |
| 0.3329 | 1.99 | 112 | 0.5558 | 0.83 |
| 0.1958 | 2.99 | 168 | 0.5639 | 0.81 |
| 0.0955 | 4.0 | 225 | 0.4130 | 0.85 |
| 0.0683 | 5.0 | 281 | 0.4681 | 0.87 |
| 0.0012 | 5.99 | 337 | 0.3278 | 0.89 |
| 0.0016 | 6.99 | 393 | 0.3064 | 0.92 |
| 0.0005 | 8.0 | 450 | 0.2827 | 0.91 |
| 0.0533 | 9.0 | 506 | 0.2788 | 0.91 |
| 0.0002 | 9.96 | 560 | 0.2797 | 0.91 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
liadraz/ppo-PyramidsRND1
|
liadraz
| 2023-08-23T19:59:46Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-23T19:57:53Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: liadraz/ppo-PyramidsRND1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
kejolong/adawong
|
kejolong
| 2023-08-23T19:58:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-23T19:55:00Z |
---
license: creativeml-openrail-m
---
|
juancopi81/whisper-small-dv
|
juancopi81
| 2023-08-23T19:49:33Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T22:28:57Z |
---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Juan Carlos Pineros HF Class
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 11.119031887888166
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Juan Carlos Pineros HF Class
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2937
- Wer Ortho: 56.7101
- Wer: 11.1190
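A short transcription sketch with the automatic-speech-recognition pipeline (the audio path is a placeholder; Dhivehi speech input is assumed):
```python
from transformers import pipeline

# Transcribe a Dhivehi audio clip with the fine-tuned Whisper-small checkpoint.
asr = pipeline("automatic-speech-recognition", model="juancopi81/whisper-small-dv")
result = asr("path/to/dhivehi_clip.wav")  # placeholder path to a local audio file
print(result["text"])
```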
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1203 | 1.63 | 500 | 0.1687 | 62.7551 | 13.3724 |
| 0.0464 | 3.26 | 1000 | 0.1757 | 58.8899 | 12.0997 |
| 0.0327 | 4.89 | 1500 | 0.1931 | 59.0919 | 11.8510 |
| 0.0118 | 6.51 | 2000 | 0.2349 | 58.2492 | 11.4042 |
| 0.007 | 8.14 | 2500 | 0.2606 | 57.7408 | 11.5259 |
| 0.0056 | 9.77 | 3000 | 0.2759 | 57.4413 | 11.0564 |
| 0.0038 | 11.4 | 3500 | 0.2785 | 57.2185 | 10.9956 |
| 0.0039 | 13.03 | 4000 | 0.2937 | 56.7101 | 11.1190 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mrkusypl/EwaIvona
|
mrkusypl
| 2023-08-23T19:48:54Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-08-23T19:21:30Z |
---
language:
- pl
---
<center>
<img src="https://www.pcworld.pl/g1/ftp/thumbnails/pc/1/0/ivona_jpg_80_adaptiveresize_750x420.webp"></img>
<h1>Ivona speech synthesizer - Ewa (RVC v2) (Harvest) (675 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Ivona speech synthesizer - Ewa <br/>
**Dataset:** 00:28:37 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1143991454868459550/1143991606941339648/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1143991454868459550/1143991643440160881/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/EwaIvona/resolve/main/Ewa%20Ivona%20%5B675%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
strombergnlp/dant5-small
|
strombergnlp
| 2023-08-23T19:48:36Z | 177 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"arxiv:2208.12097",
"doi:10.57967/hf/0012",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-23T14:36:37Z |
---
language:
- da
language_bcp47:
- da
- da-bornholm
- da-synnejyl
tags:
- t5
license: cc-by-4.0
datasets:
- dagw
widget:
- text: "Aarhus er Danmarks <extra_id_0>.<extra_id_2>"
co2_eq_emissions:
training_type: "pretraining"
geographical_location: "Copenhagen, Denmark"
hardware_used: "4 A100 GPUs, 91 training hours"
emissions: 23660
---
# dant5-small
`dant5-small` is a 60M parameter model with architecture identical to `t5-small`. Training details are given in the paper [Training a T5 Using Lab-sized Resources](https://arxiv.org/abs/2208.12097). It was trained for 10 epochs on the Danish Gigaword Corpus ([official website](https://gigaword.dk), [paper](https://aclanthology.org/2021.nodalida-main.46/)).
## To use the model
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "strombergnlp/dant5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
original_text = "Aarhus er Danmarks <extra_id_0> landets ældste. Under navnet Aros, som betyder å-munding, optræder den i skriftlige kilder i 900-tallet, men <extra_id_1> historie tilbage til 700-tallet.<extra_id_2>"
original_label = "<extra_id_0> næststørste by og en af <extra_id_1> arkæologiske fund fører dens <extra_id_2>"
input_ids = tokenizer(original_text, return_tensors="pt").input_ids
labels = tokenizer(original_label, return_tensors="pt").input_ids
loss = model(input_ids=input_ids, labels=labels).loss
print(f"Original text: {original_text}")
print(f"Original label: {original_label}")
print(f"Loss for the original label is {loss.item()}")
sequence_ids = model.generate(input_ids)
sequences = tokenizer.batch_decode(sequence_ids)
print(f"A sample generated continuation: ")
print(sequences[0])
```
You should see output similar to:
```
Original text: Aarhus er Danmarks <extra_id_0> landets ældste. Under navnet Aros, som betyder å-munding, optræder den i skriftlige kilder i 900-tallet, men <extra_id_1> historie tilbage til 700-tallet.<extra_id_2>
Original label: <extra_id_0> næststørste by og en af <extra_id_1> arkæologiske fund fører dens <extra_id_2>
Loss for the original label is 3.383681297302246
A sample generated continuation:
<pad><extra_id_0> ældste og<extra_id_1> har sin<extra_id_2> Aarhus er Danmarks ældste<extra_id_3></s>
```
|
zerophinx/minor_empire
|
zerophinx
| 2023-08-23T19:48:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-23T19:44:28Z |
Minor Empire Model 200 Epochs V2
|
s-nlp/bert-base-uncased-stsb-TTM
|
s-nlp
| 2023-08-23T19:37:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"custom_code",
"autotrain_compatible",
"region:us"
] |
text-classification
| 2023-08-22T14:03:57Z |
---
metrics:
- spearmanr
- pearsonr
---
## Model Overview
This is a TTM-compressed (Tensor Train Matrix) version of a bert-base-uncased model fine-tuned on STS-B.
The original model was trained on the STS-B corpus and reaches a combined Spearman/Pearson score of 0.87; after TTM compression to 58% of the original size (64M parameters) with additional fine-tuning, the compressed model scores 0.843.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("s-nlp/bert-base-uncased-stsb-TTM", trust_remote_code=True)
```
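A quick scoring sketch (assuming the usual single-logit STS-B regression head; the sentence pair is illustrative):
```python
import torch

# Encode a sentence pair and read the similarity from the regression logit (roughly the 0-5 STS-B scale).
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity: {score:.2f}")
```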
---
license: other
---
|
dkqjrm/20230824023615
|
dkqjrm
| 2023-08-23T19:36:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T17:36:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824023615'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824023615
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0725
- Accuracy: 0.7365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.6124 | 0.5271 |
| 0.3459 | 2.0 | 624 | 0.2937 | 0.4729 |
| 0.3459 | 3.0 | 936 | 0.4930 | 0.4693 |
| 0.2482 | 4.0 | 1248 | 0.1965 | 0.4693 |
| 0.2242 | 5.0 | 1560 | 0.2537 | 0.4693 |
| 0.2242 | 6.0 | 1872 | 0.1661 | 0.5632 |
| 0.2359 | 7.0 | 2184 | 0.1414 | 0.6570 |
| 0.2359 | 8.0 | 2496 | 0.1893 | 0.5018 |
| 0.2404 | 9.0 | 2808 | 0.1265 | 0.6173 |
| 0.2198 | 10.0 | 3120 | 0.1214 | 0.6679 |
| 0.2198 | 11.0 | 3432 | 0.1352 | 0.6029 |
| 0.1657 | 12.0 | 3744 | 0.1030 | 0.7040 |
| 0.1472 | 13.0 | 4056 | 0.1043 | 0.6931 |
| 0.1472 | 14.0 | 4368 | 0.1011 | 0.7004 |
| 0.1408 | 15.0 | 4680 | 0.1111 | 0.7148 |
| 0.1408 | 16.0 | 4992 | 0.1046 | 0.6931 |
| 0.1321 | 17.0 | 5304 | 0.0964 | 0.7004 |
| 0.1285 | 18.0 | 5616 | 0.1019 | 0.7220 |
| 0.1285 | 19.0 | 5928 | 0.0927 | 0.7256 |
| 0.1244 | 20.0 | 6240 | 0.0972 | 0.7004 |
| 0.1191 | 21.0 | 6552 | 0.0947 | 0.7076 |
| 0.1191 | 22.0 | 6864 | 0.0983 | 0.7184 |
| 0.1129 | 23.0 | 7176 | 0.1029 | 0.7040 |
| 0.1129 | 24.0 | 7488 | 0.0993 | 0.7112 |
| 0.1115 | 25.0 | 7800 | 0.0933 | 0.7076 |
| 0.1079 | 26.0 | 8112 | 0.1092 | 0.6931 |
| 0.1079 | 27.0 | 8424 | 0.0837 | 0.7437 |
| 0.105 | 28.0 | 8736 | 0.0825 | 0.7256 |
| 0.1049 | 29.0 | 9048 | 0.0809 | 0.7148 |
| 0.1049 | 30.0 | 9360 | 0.0924 | 0.7256 |
| 0.1021 | 31.0 | 9672 | 0.0820 | 0.7292 |
| 0.1021 | 32.0 | 9984 | 0.0793 | 0.7256 |
| 0.099 | 33.0 | 10296 | 0.0820 | 0.7365 |
| 0.0966 | 34.0 | 10608 | 0.0831 | 0.7184 |
| 0.0966 | 35.0 | 10920 | 0.0796 | 0.7256 |
| 0.0928 | 36.0 | 11232 | 0.0790 | 0.7292 |
| 0.0888 | 37.0 | 11544 | 0.0953 | 0.7256 |
| 0.0888 | 38.0 | 11856 | 0.0791 | 0.7437 |
| 0.0905 | 39.0 | 12168 | 0.0849 | 0.7473 |
| 0.0905 | 40.0 | 12480 | 0.0782 | 0.7401 |
| 0.0872 | 41.0 | 12792 | 0.0754 | 0.7292 |
| 0.0853 | 42.0 | 13104 | 0.0770 | 0.7365 |
| 0.0853 | 43.0 | 13416 | 0.0742 | 0.7473 |
| 0.0843 | 44.0 | 13728 | 0.0764 | 0.7220 |
| 0.0826 | 45.0 | 14040 | 0.0765 | 0.7256 |
| 0.0826 | 46.0 | 14352 | 0.0746 | 0.7365 |
| 0.0811 | 47.0 | 14664 | 0.0736 | 0.7292 |
| 0.0811 | 48.0 | 14976 | 0.0824 | 0.7292 |
| 0.079 | 49.0 | 15288 | 0.0749 | 0.7401 |
| 0.0783 | 50.0 | 15600 | 0.0734 | 0.7401 |
| 0.0783 | 51.0 | 15912 | 0.0740 | 0.7401 |
| 0.0806 | 52.0 | 16224 | 0.0749 | 0.7365 |
| 0.078 | 53.0 | 16536 | 0.0729 | 0.7365 |
| 0.078 | 54.0 | 16848 | 0.0728 | 0.7401 |
| 0.0764 | 55.0 | 17160 | 0.0722 | 0.7437 |
| 0.0764 | 56.0 | 17472 | 0.0745 | 0.7365 |
| 0.0766 | 57.0 | 17784 | 0.0730 | 0.7329 |
| 0.0751 | 58.0 | 18096 | 0.0725 | 0.7401 |
| 0.0751 | 59.0 | 18408 | 0.0730 | 0.7365 |
| 0.0765 | 60.0 | 18720 | 0.0725 | 0.7365 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MattStammers/ppo-QbertNoFrameskip-v4
|
MattStammers
| 2023-08-23T19:30:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"QbertNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T19:28:48Z |
---
library_name: stable-baselines3
tags:
- QbertNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: QbertNoFrameskip-v4
type: QbertNoFrameskip-v4
metrics:
- type: mean_reward
value: 16300.00 +/- 1892.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **QbertNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env QbertNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo ppo --env QbertNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo ppo --env QbertNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo ppo --env QbertNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo ppo --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env QbertNoFrameskip-v4 -f logs/ -orga MattStammers
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('normalize', False),
('policy', 'CnnPolicy'),
('vf_coef', 0.5)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
asenella/ms_config_1_alpha_90_beta_50_seed_2
|
asenella
| 2023-08-23T19:19:42Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-23T19:19:40Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Akibub/jennysmith3
|
Akibub
| 2023-08-23T19:07:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T18:53:43Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### jennysmith3 Dreambooth model trained by Akibub with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
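Alternatively, a minimal `diffusers` sketch for local generation (the fp16/CUDA setup and the prompt token are assumptions based on the concept name):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint and generate one sample image.
pipe = StableDiffusionPipeline.from_pretrained("Akibub/jennysmith3", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of jennysmith3", num_inference_steps=30).images[0]  # prompt token assumed
image.save("jennysmith3_sample.png")
```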
Sample pictures of this concept:
|
ardt-multipart/ardt-multipart-robust_train_walker2d_v3-2308_1824-99
|
ardt-multipart
| 2023-08-23T18:49:44Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T17:26:25Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-robust_train_walker2d_v3-2308_1824-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-robust_train_walker2d_v3-2308_1824-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
artyomboyko/whisper-tiny-finetuned-minds14
|
artyomboyko
| 2023-08-23T18:43:37Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-23T16:53:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.33884297520661155
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7159
- Wer Ortho: 0.3461
- Wer: 0.3388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0011 | 17.86 | 500 | 0.7159 | 0.3461 | 0.3388 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fp16-guy/Sweet-mix_fp16_cleaned
|
fp16-guy
| 2023-08-23T18:40:50Z | 0 | 0 | null |
[
"text-to-image",
"region:us"
] |
text-to-image
| 2023-08-11T14:13:24Z |
---
pipeline_tag: text-to-image
---
Sweet-mix, but fp16/cleaned - smaller size, same result.
========
///
**[original checkpoint link](https://civitai.com/models/18927/sweet-mix)**
*(all rights to the model belong to Manseo)*
---
*[grid 01](https://huggingface.co/datasets/fp16-guy/grids/blob/main/sweet-mix%2001%2020230811104443-111-sweetMix_v21-Euler%20a-6.png)* *(1.99gb 2.1 version)*
*[grid 02](https://huggingface.co/datasets/fp16-guy/grids/blob/main/sweet-mix%2002%2020230811104555-111-sweetMix_v21-Euler%20a-6.png)* *(1.83gb 2.1 version - no vae)*
*[grid 03](https://huggingface.co/datasets/fp16-guy/grids/blob/main/sweet-mix%2020-flat%2001%2020230812175855-111-sweetMix_v20Flat-Euler%20a-6.png)* *(1.99gb 2.0-flat version)*
*[grid 04](https://huggingface.co/datasets/fp16-guy/grids/blob/main/sweet-mix%2020-flat%2002%2020230812180157-111-sweetMix_v20Flat-Euler%20a-6.png)* *(1.83gb 2.0-flat version - no vae)*
|
922-CA/kacpdw-gfl-rvc2-tests
|
922-CA
| 2023-08-23T18:31:14Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-22T09:37:49Z |
---
license: openrail
---
Test RVC v2 models of the GFL character KACPDW, trained with various hyperparameters and datasets.
# kacpdw-test-1/1a/1b (~07/2023)
* Trained on dataset of ~30 items, dialogue from game
* Trained for ~100/150/50 epochs
* First attempts
# kacpdw-test-2, various (08/23/2023)
* Trained on dataset of ~30 items, dialogue from game
* Second attempts (45 epochs/495 steps seems to be best)
|
RajuEEE/RewardModelForQuestionAnswering_LLama2_RevisedData
|
RajuEEE
| 2023-08-23T18:27:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T18:27:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
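For reference, a sketch of how this quantization config could be rebuilt when reloading the base model (the base model id is not stated in this card, so the loading step is only indicated in a comment):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirror of the settings listed above; the 4-bit fields stay at their defaults since load_in_4bit is False.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
# Pass bnb_config as quantization_config= when loading the (unstated) base model the adapter was trained on.
```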
### Framework versions
- PEFT 0.6.0.dev0
|
am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_FNETUNE_1.0
|
am-infoweb
| 2023-08-23T18:24:35Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_roberta-base",
"base_model:finetune:am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-23T18:15:10Z |
---
license: mit
base_model: am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_roberta-base
tags:
- generated_from_trainer
model-index:
- name: QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_FNETUNE_1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_FNETUNE_1.0
This model is a fine-tuned version of [am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_roberta-base](https://huggingface.co/am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
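A minimal extractive question-answering sketch (question and context are illustrative):
```python
from transformers import pipeline

# The pipeline returns an answer span plus a confidence score.
qa = pipeline("question-answering", model="am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_FNETUNE_1.0")
print(qa(question="Where is the invoice total stated?", context="The invoice total of $420 is stated in section 3."))
```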
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 188 | 0.0000 |
| No log | 2.0 | 376 | 0.0000 |
| 0.2453 | 3.0 | 564 | 0.0000 |
| 0.2453 | 4.0 | 752 | 0.0000 |
| 0.2453 | 5.0 | 940 | 0.0000 |
| 0.0002 | 6.0 | 1128 | 0.0000 |
| 0.0002 | 7.0 | 1316 | 0.0000 |
| 0.0 | 8.0 | 1504 | 0.0000 |
| 0.0 | 9.0 | 1692 | 0.0000 |
| 0.0 | 10.0 | 1880 | 0.0000 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vuminhtue/bert-finetuned-ner
|
vuminhtue
| 2023-08-23T18:23:24Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-23T18:17:21Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: vuminhtue/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vuminhtue/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0279
- Validation Loss: 0.0510
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
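A hedged sketch of how such an optimizer is typically built with the `transformers` TF helpers (the exact call used for this run is not recorded in the card; the warmup step count is assumed to be 0 since none appears above):
```python
import tensorflow as tf
from transformers import create_optimizer

# Mixed-precision policy matching training_precision: mixed_float16.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay with a linear (power=1.0) polynomial decay over 2634 steps, as listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=2634,
    num_warmup_steps=0,  # assumption: no warmup is listed in the config above
    weight_decay_rate=0.01,
)
```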
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1780 | 0.0630 | 0 |
| 0.0487 | 0.0506 | 1 |
| 0.0279 | 0.0510 | 2 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.9.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
asenella/ms_config_1_alpha_90_beta_50_seed_1
|
asenella
| 2023-08-23T18:20:16Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-23T18:20:14Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
May-Si/MyFineAlpaca
|
May-Si
| 2023-08-23T18:16:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T18:11:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
JiemingYou/Reinforce-CartPole-v1-policy
|
JiemingYou
| 2023-08-23T18:10:38Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T17:46:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-policy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
HeinrichWirth/whisper-tiny_en
|
HeinrichWirth
| 2023-08-23T17:48:44Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-23T15:16:40Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny_en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.36304909560723514
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny_en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6349
- Wer Ortho: 0.3964
- Wer: 0.3630
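Wer above is the word error rate on the held-out split; a short sketch of computing it with the `evaluate` library (the example strings are illustrative):
```python
import evaluate

# WER = (substitutions + insertions + deletions) / number of reference words.
wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["i would like to check my balance"],
    references=["i would like to check my account balance"],
)
print(wer)  # 0.125 for this single pair: one deletion over eight reference words
```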
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 3.8643 | 1.79 | 50 | 3.5786 | 0.5114 | 0.3714 |
| 2.4042 | 3.57 | 100 | 2.3266 | 0.4657 | 0.3689 |
| 1.4319 | 5.36 | 150 | 1.3619 | 0.4367 | 0.3702 |
| 0.7558 | 7.14 | 200 | 0.7935 | 0.4213 | 0.3721 |
| 0.524 | 8.93 | 250 | 0.6820 | 0.4078 | 0.3721 |
| 0.4702 | 10.71 | 300 | 0.6349 | 0.3964 | 0.3630 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230824002458
|
dkqjrm
| 2023-08-23T17:36:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T15:25:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824002458'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824002458
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0768
- Accuracy: 0.7112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4042 | 1.0 | 623 | 0.3862 | 0.5271 |
| 0.3203 | 2.0 | 1246 | 1.0958 | 0.4729 |
| 0.3087 | 3.0 | 1869 | 0.5979 | 0.4729 |
| 0.2723 | 4.0 | 2492 | 0.1618 | 0.5271 |
| 0.2635 | 5.0 | 3115 | 0.2704 | 0.5343 |
| 0.2826 | 6.0 | 3738 | 0.3245 | 0.4729 |
| 0.2663 | 7.0 | 4361 | 0.2230 | 0.5957 |
| 0.2562 | 8.0 | 4984 | 0.1453 | 0.6390 |
| 0.2259 | 9.0 | 5607 | 0.1312 | 0.6282 |
| 0.1806 | 10.0 | 6230 | 0.1118 | 0.7148 |
| 0.1525 | 11.0 | 6853 | 0.1076 | 0.6787 |
| 0.1509 | 12.0 | 7476 | 0.1241 | 0.6643 |
| 0.149 | 13.0 | 8099 | 0.1158 | 0.6931 |
| 0.1509 | 14.0 | 8722 | 0.1154 | 0.7040 |
| 0.1397 | 15.0 | 9345 | 0.1096 | 0.6823 |
| 0.1311 | 16.0 | 9968 | 0.0999 | 0.6751 |
| 0.13 | 17.0 | 10591 | 0.0986 | 0.6968 |
| 0.1244 | 18.0 | 11214 | 0.1063 | 0.6895 |
| 0.1278 | 19.0 | 11837 | 0.1229 | 0.6931 |
| 0.1228 | 20.0 | 12460 | 0.0905 | 0.7112 |
| 0.1153 | 21.0 | 13083 | 0.0916 | 0.7004 |
| 0.1171 | 22.0 | 13706 | 0.1085 | 0.7148 |
| 0.1179 | 23.0 | 14329 | 0.1101 | 0.7256 |
| 0.1069 | 24.0 | 14952 | 0.0917 | 0.6895 |
| 0.1019 | 25.0 | 15575 | 0.0837 | 0.7112 |
| 0.1017 | 26.0 | 16198 | 0.0832 | 0.7148 |
| 0.1034 | 27.0 | 16821 | 0.0847 | 0.7220 |
| 0.0989 | 28.0 | 17444 | 0.0830 | 0.7256 |
| 0.0969 | 29.0 | 18067 | 0.0817 | 0.7148 |
| 0.0964 | 30.0 | 18690 | 0.0835 | 0.7112 |
| 0.0957 | 31.0 | 19313 | 0.0846 | 0.7148 |
| 0.0937 | 32.0 | 19936 | 0.0827 | 0.7112 |
| 0.0895 | 33.0 | 20559 | 0.0860 | 0.7220 |
| 0.0905 | 34.0 | 21182 | 0.0830 | 0.7220 |
| 0.0875 | 35.0 | 21805 | 0.0796 | 0.7184 |
| 0.0895 | 36.0 | 22428 | 0.0811 | 0.7076 |
| 0.0861 | 37.0 | 23051 | 0.0805 | 0.7112 |
| 0.0868 | 38.0 | 23674 | 0.0786 | 0.7040 |
| 0.0798 | 39.0 | 24297 | 0.0787 | 0.7148 |
| 0.0827 | 40.0 | 24920 | 0.0815 | 0.7112 |
| 0.0798 | 41.0 | 25543 | 0.0790 | 0.7184 |
| 0.079 | 42.0 | 26166 | 0.0813 | 0.7220 |
| 0.0794 | 43.0 | 26789 | 0.0802 | 0.7112 |
| 0.0766 | 44.0 | 27412 | 0.0796 | 0.7076 |
| 0.0766 | 45.0 | 28035 | 0.0813 | 0.7329 |
| 0.0765 | 46.0 | 28658 | 0.0810 | 0.7112 |
| 0.0744 | 47.0 | 29281 | 0.0781 | 0.7148 |
| 0.076 | 48.0 | 29904 | 0.0794 | 0.7148 |
| 0.0728 | 49.0 | 30527 | 0.0780 | 0.7112 |
| 0.0745 | 50.0 | 31150 | 0.0767 | 0.7256 |
| 0.0711 | 51.0 | 31773 | 0.0771 | 0.7220 |
| 0.0726 | 52.0 | 32396 | 0.0772 | 0.7256 |
| 0.0747 | 53.0 | 33019 | 0.0772 | 0.7184 |
| 0.0711 | 54.0 | 33642 | 0.0772 | 0.7256 |
| 0.0676 | 55.0 | 34265 | 0.0767 | 0.7329 |
| 0.0697 | 56.0 | 34888 | 0.0783 | 0.7220 |
| 0.0692 | 57.0 | 35511 | 0.0766 | 0.7184 |
| 0.067 | 58.0 | 36134 | 0.0773 | 0.7148 |
| 0.0676 | 59.0 | 36757 | 0.0774 | 0.7112 |
| 0.0678 | 60.0 | 37380 | 0.0768 | 0.7112 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230824002455
|
dkqjrm
| 2023-08-23T17:35:04Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T15:25:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824002455'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824002455
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7440
- Accuracy: 0.7473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0306 | 1.0 | 623 | 0.6949 | 0.4729 |
| 0.8552 | 2.0 | 1246 | 0.7454 | 0.5596 |
| 0.9623 | 3.0 | 1869 | 0.8165 | 0.4874 |
| 0.8291 | 4.0 | 2492 | 1.1894 | 0.5704 |
| 0.8201 | 5.0 | 3115 | 0.6677 | 0.6823 |
| 0.8297 | 6.0 | 3738 | 0.6379 | 0.7256 |
| 0.7792 | 7.0 | 4361 | 0.6572 | 0.6931 |
| 0.6925 | 8.0 | 4984 | 0.6975 | 0.6498 |
| 0.7243 | 9.0 | 5607 | 0.7871 | 0.6679 |
| 0.69 | 10.0 | 6230 | 0.7707 | 0.7148 |
| 0.6492 | 11.0 | 6853 | 0.7202 | 0.7004 |
| 0.6448 | 12.0 | 7476 | 0.6862 | 0.7329 |
| 0.6571 | 13.0 | 8099 | 0.6079 | 0.7256 |
| 0.6558 | 14.0 | 8722 | 0.8183 | 0.7329 |
| 0.5996 | 15.0 | 9345 | 0.5783 | 0.7256 |
| 0.5494 | 16.0 | 9968 | 0.5463 | 0.7473 |
| 0.4964 | 17.0 | 10591 | 0.7906 | 0.7040 |
| 0.4914 | 18.0 | 11214 | 0.5334 | 0.7220 |
| 0.4933 | 19.0 | 11837 | 0.6681 | 0.7329 |
| 0.4655 | 20.0 | 12460 | 0.8837 | 0.7401 |
| 0.4432 | 21.0 | 13083 | 0.7407 | 0.7473 |
| 0.4051 | 22.0 | 13706 | 0.7213 | 0.7509 |
| 0.4018 | 23.0 | 14329 | 0.8420 | 0.7365 |
| 0.3745 | 24.0 | 14952 | 0.6421 | 0.7365 |
| 0.3558 | 25.0 | 15575 | 0.5727 | 0.7437 |
| 0.3325 | 26.0 | 16198 | 0.6941 | 0.7545 |
| 0.3471 | 27.0 | 16821 | 0.8213 | 0.7545 |
| 0.3405 | 28.0 | 17444 | 0.7249 | 0.7292 |
| 0.3079 | 29.0 | 18067 | 0.5829 | 0.7545 |
| 0.3136 | 30.0 | 18690 | 0.7057 | 0.7617 |
| 0.3152 | 31.0 | 19313 | 0.7746 | 0.7509 |
| 0.2989 | 32.0 | 19936 | 0.6028 | 0.7617 |
| 0.2657 | 33.0 | 20559 | 0.8212 | 0.7509 |
| 0.2703 | 34.0 | 21182 | 0.7015 | 0.7401 |
| 0.2562 | 35.0 | 21805 | 0.5706 | 0.7581 |
| 0.2738 | 36.0 | 22428 | 0.7036 | 0.7690 |
| 0.2404 | 37.0 | 23051 | 0.6888 | 0.7545 |
| 0.2595 | 38.0 | 23674 | 0.7086 | 0.7437 |
| 0.245 | 39.0 | 24297 | 0.7283 | 0.7401 |
| 0.2279 | 40.0 | 24920 | 0.7231 | 0.7401 |
| 0.2288 | 41.0 | 25543 | 0.6915 | 0.7365 |
| 0.2166 | 42.0 | 26166 | 0.8110 | 0.7329 |
| 0.219 | 43.0 | 26789 | 0.7984 | 0.7437 |
| 0.1935 | 44.0 | 27412 | 0.8829 | 0.7401 |
| 0.2105 | 45.0 | 28035 | 0.7270 | 0.7545 |
| 0.2079 | 46.0 | 28658 | 0.8026 | 0.7365 |
| 0.1859 | 47.0 | 29281 | 0.6536 | 0.7617 |
| 0.2211 | 48.0 | 29904 | 0.7410 | 0.7401 |
| 0.1862 | 49.0 | 30527 | 0.8433 | 0.7401 |
| 0.2015 | 50.0 | 31150 | 0.6761 | 0.7437 |
| 0.1921 | 51.0 | 31773 | 0.7471 | 0.7545 |
| 0.1899 | 52.0 | 32396 | 0.8135 | 0.7437 |
| 0.188 | 53.0 | 33019 | 0.7556 | 0.7365 |
| 0.1771 | 54.0 | 33642 | 0.7566 | 0.7365 |
| 0.1697 | 55.0 | 34265 | 0.7515 | 0.7509 |
| 0.185 | 56.0 | 34888 | 0.7795 | 0.7437 |
| 0.177 | 57.0 | 35511 | 0.7455 | 0.7509 |
| 0.1663 | 58.0 | 36134 | 0.7345 | 0.7509 |
| 0.1722 | 59.0 | 36757 | 0.7430 | 0.7509 |
| 0.1696 | 60.0 | 37380 | 0.7440 | 0.7473 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Frorozcol/llama_7b_recetas
|
Frorozcol
| 2023-08-23T17:24:10Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T15:04:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
ardt-multipart/ardt-multipart-robust_train_walker2d_v3-2308_1656-66
|
ardt-multipart
| 2023-08-23T17:23:09Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T15:57:56Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-robust_train_walker2d_v3-2308_1656-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-robust_train_walker2d_v3-2308_1656-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
agarc15/mega-base-wikitext-finetuned-INCIBE
|
agarc15
| 2023-08-23T17:09:23Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mega",
"text-classification",
"generated_from_trainer",
"base_model:mnaylor/mega-base-wikitext",
"base_model:finetune:mnaylor/mega-base-wikitext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T08:16:25Z |
---
license: apache-2.0
base_model: mnaylor/mega-base-wikitext
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mega-base-wikitext-finetuned-INCIBE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mega-base-wikitext-finetuned-INCIBE
This model is a fine-tuned version of [mnaylor/mega-base-wikitext](https://huggingface.co/mnaylor/mega-base-wikitext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5169
- Accuracy: 0.3850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 1.5359 | 0.3742 |
| No log | 2.0 | 392 | 1.5257 | 0.3753 |
| 1.5035 | 3.0 | 588 | 1.5320 | 0.3719 |
| 1.5035 | 4.0 | 784 | 1.5231 | 0.3715 |
| 1.5035 | 5.0 | 980 | 1.5203 | 0.3745 |
| 1.4755 | 6.0 | 1176 | 1.5217 | 0.3742 |
| 1.4755 | 7.0 | 1372 | 1.5301 | 0.3719 |
| 1.4531 | 8.0 | 1568 | 1.5131 | 0.3805 |
| 1.4531 | 9.0 | 1764 | 1.5212 | 0.3783 |
| 1.4531 | 10.0 | 1960 | 1.5173 | 0.3771 |
| 1.4426 | 11.0 | 2156 | 1.5190 | 0.3809 |
| 1.4426 | 12.0 | 2352 | 1.5122 | 0.3794 |
| 1.4238 | 13.0 | 2548 | 1.5129 | 0.3794 |
| 1.4238 | 14.0 | 2744 | 1.5176 | 0.3783 |
| 1.4238 | 15.0 | 2940 | 1.5139 | 0.3783 |
| 1.4161 | 16.0 | 3136 | 1.5235 | 0.3805 |
| 1.4161 | 17.0 | 3332 | 1.5125 | 0.3846 |
| 1.4115 | 18.0 | 3528 | 1.5171 | 0.3827 |
| 1.4115 | 19.0 | 3724 | 1.5112 | 0.3827 |
| 1.4115 | 20.0 | 3920 | 1.5123 | 0.3816 |
| 1.4052 | 21.0 | 4116 | 1.5126 | 0.3827 |
| 1.4052 | 22.0 | 4312 | 1.5170 | 0.3850 |
| 1.4004 | 23.0 | 4508 | 1.5135 | 0.3805 |
| 1.4004 | 24.0 | 4704 | 1.5157 | 0.3809 |
| 1.4004 | 25.0 | 4900 | 1.5160 | 0.3824 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
alexandre-co/ppo-Huggy
|
alexandre-co
| 2023-08-23T17:07:59Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-23T17:07:53Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: alexandre-co/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
akashmaggon/distilbert-base-uncased-finetuned-imdb
|
akashmaggon
| 2023-08-23T17:00:58Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-23T14:15:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6959 | 1.0 | 157 | 2.5440 |
| 2.5692 | 2.0 | 314 | 2.4636 |
| 2.5434 | 3.0 | 471 | 2.4249 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rossevine/Model_S_D_Wav2Vec2
|
rossevine
| 2023-08-23T16:55:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T07:33:43Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Model_S_D_Wav2Vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model_S_D_Wav2Vec2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0464
- Wer: 0.2319
- Cer: 0.0598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.5768 | 0.85 | 400 | 0.6152 | 0.5812 | 0.1905 |
| 0.3226 | 1.71 | 800 | 0.1026 | 0.3195 | 0.0722 |
| 0.1827 | 2.56 | 1200 | 0.0725 | 0.2048 | 0.0454 |
| 0.129 | 3.41 | 1600 | 0.0671 | 0.2393 | 0.0525 |
| 0.1075 | 4.26 | 2000 | 0.0556 | 0.2312 | 0.0497 |
| 0.0924 | 5.12 | 2400 | 0.0572 | 0.2040 | 0.0478 |
| 0.076 | 5.97 | 2800 | 0.0596 | 0.1472 | 0.0346 |
| 0.0695 | 6.82 | 3200 | 0.0608 | 0.2274 | 0.0510 |
| 0.0707 | 7.68 | 3600 | 0.0490 | 0.2665 | 0.0660 |
| 0.0597 | 8.53 | 4000 | 0.0509 | 0.2442 | 0.0593 |
| 0.0557 | 9.38 | 4400 | 0.0501 | 0.2533 | 0.0610 |
| 0.0503 | 10.23 | 4800 | 0.0519 | 0.2534 | 0.0622 |
| 0.0471 | 11.09 | 5200 | 0.0512 | 0.2585 | 0.0638 |
| 0.0417 | 11.94 | 5600 | 0.0497 | 0.2522 | 0.0610 |
| 0.0415 | 12.79 | 6000 | 0.0508 | 0.2547 | 0.0629 |
| 0.0372 | 13.65 | 6400 | 0.0497 | 0.2580 | 0.0643 |
| 0.0364 | 14.5 | 6800 | 0.0448 | 0.2498 | 0.0600 |
| 0.034 | 15.35 | 7200 | 0.0522 | 0.2419 | 0.0593 |
| 0.0306 | 16.2 | 7600 | 0.0510 | 0.2433 | 0.0560 |
| 0.0345 | 17.06 | 8000 | 0.0503 | 0.2610 | 0.0657 |
| 0.0266 | 17.91 | 8400 | 0.0462 | 0.2434 | 0.0620 |
| 0.0273 | 18.76 | 8800 | 0.0507 | 0.2456 | 0.0622 |
| 0.0216 | 19.62 | 9200 | 0.0466 | 0.2214 | 0.0531 |
| 0.0208 | 20.47 | 9600 | 0.0497 | 0.2396 | 0.0598 |
| 0.0201 | 21.32 | 10000 | 0.0470 | 0.2332 | 0.0559 |
| 0.0174 | 22.17 | 10400 | 0.0418 | 0.2346 | 0.0590 |
| 0.0198 | 23.03 | 10800 | 0.0472 | 0.2386 | 0.0602 |
| 0.0149 | 23.88 | 11200 | 0.0490 | 0.2446 | 0.0638 |
| 0.0133 | 24.73 | 11600 | 0.0497 | 0.2430 | 0.0632 |
| 0.0118 | 25.59 | 12000 | 0.0498 | 0.2368 | 0.0620 |
| 0.0106 | 26.44 | 12400 | 0.0453 | 0.2309 | 0.0590 |
| 0.0104 | 27.29 | 12800 | 0.0452 | 0.2296 | 0.0583 |
| 0.0085 | 28.14 | 13200 | 0.0467 | 0.2352 | 0.0604 |
| 0.0081 | 29.0 | 13600 | 0.0470 | 0.2310 | 0.0592 |
| 0.0079 | 29.85 | 14000 | 0.0464 | 0.2319 | 0.0598 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 1.18.3
- Tokenizers 0.13.3
|
kadir0/my_awesome_model
|
kadir0
| 2023-08-23T16:53:17Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T13:37:14Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
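These settings correspond roughly to the `TrainingArguments` sketch below (not the exact training script; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Approximate configuration matching the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="my_awesome_model",      # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```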
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2261 | 1.0 | 1563 | 0.2259 | 0.9174 |
| 0.154 | 2.0 | 3126 | 0.2223 | 0.9315 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
reginaboateng/preffier_BERT_adapter_ner_pico_for_classification_task
|
reginaboateng
| 2023-08-23T16:40:37Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pico_ner",
"dataset:reginaboateng/cleaned_ebmnlp_pico",
"region:us"
] | null | 2023-08-23T16:40:33Z |
---
tags:
- bert
- adapter-transformers
- adapterhub:pico_ner
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/preffier_BERT_adapter_ner_pico_for_classification_task` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("reginaboateng/preffier_BERT_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
nagupv/llama30B_contextLLMExam_18kv2_f0
|
nagupv
| 2023-08-23T16:39:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T13:28:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
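A minimal sketch of how this config might be expressed in code (the base model ID is an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-30b", quantization_config=bnb_config)
```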
### Framework versions
- PEFT 0.6.0.dev0
|
vargr/yt-grader-model
|
vargr
| 2023-08-23T16:36:20Z | 238 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-23T16:35:42Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: yt-grader-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yt-grader-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the yt-thumbnail-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4270
- Accuracy: 0.8431
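A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Classify a thumbnail image with the fine-tuned ViT checkpoint
classifier = pipeline("image-classification", model="vargr/yt-grader-model")
print(classifier("thumbnail.jpg"))
```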
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4166 | 1.0 | 442 | 0.4169 | 0.8079 |
| 0.2478 | 2.0 | 884 | 0.3685 | 0.8395 |
| 0.1407 | 3.0 | 1326 | 0.4270 | 0.8431 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AnnaMats/ppo-Pyramids-Training
|
AnnaMats
| 2023-08-23T16:32:17Z | 21 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-22T09:21:12Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AnnaMats/ppo-Pyramids-Training
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
karimasbar/resultss
|
karimasbar
| 2023-08-23T16:27:16Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-08-23T16:26:58Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: resultss
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resultss
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 5000
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
zarakiquemparte/zaraxe-l2-7b
|
zarakiquemparte
| 2023-08-23T16:22:35Z | 1,474 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-23T15:38:10Z |
---
license: other
tags:
- llama2
---
# Model Card: ZaraXE L2 7b
This model uses [Zarafusionex L2 7b without LimaRP](https://huggingface.co/zarakiquemparte/zarafusionex-l2-7b) (71%) as a base, merged with [Airoboros L2 7B GPT4 2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) (29%); the result of this merge was then merged with [LimaRP LLama2 7B Lora](https://huggingface.co/lemonilia/limarp-llama2).
The merge of the models (Zarafusionex w/o LimaRP and Airoboros) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py).
The merge of the LoRA with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py).
Merge illustration:

## Usage:
Since this is a merge between Zarafusionex, Airoboros and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
|
asenella/ms_config_1_alpha_90_beta_50_seed_0
|
asenella
| 2023-08-23T16:19:14Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-23T16:19:12Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Stepa/sd-class-butterflies-128-first-unit
|
Stepa
| 2023-08-23T16:17:47Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-23T16:17:37Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Stepa/sd-class-butterflies-128-first-unit')
image = pipeline().images[0]
image
```
|
arnavagrawal/BLOOMZ
|
arnavagrawal
| 2023-08-23T16:12:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T16:12:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
yasmineelabbar/marian-finetuned-kde4-en-to-fr
|
yasmineelabbar
| 2023-08-23T16:00:05Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-21T11:34:21Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.88529894542656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8853
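A minimal usage sketch (assuming the checkpoint is loadable from the Hub under this ID):
```python
from transformers import pipeline

# English -> French translation with the fine-tuned checkpoint
translator = pipeline("translation", model="yasmineelabbar/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```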
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
daochf/Lora-Opt6_7b-PuceDS-v03x50
|
daochf
| 2023-08-23T15:57:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T15:56:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
- PEFT 0.5.0
|
usvsnsp/pythia-6.9b-sft
|
usvsnsp
| 2023-08-23T15:53:41Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-08T17:42:17Z |
wandb run: https://wandb.ai/usvsnsp/trlx/runs/llxa7qkl
Model evals:
| Task |Version|Filter| Metric |Value | |Stderr|
|-------------|-------|------|--------|-----:|---|-----:|
|arc_challenge|Yaml |none |acc |0.3387|± |0.0138|
| | |none |acc_norm|0.3532|± |0.0140|
|arc_easy |Yaml |none |acc |0.6936|± |0.0095|
| | |none |acc_norm|0.6187|± |0.0100|
|logiqa |Yaml |none |acc |0.2335|± |0.0166|
| | |none |acc_norm|0.2734|± |0.0175|
|piqa |Yaml |none |acc |0.7535|± |0.0101|
| | |none |acc_norm|0.7693|± |0.0098|
|sciq |Yaml |none |acc |0.9020|± |0.0094|
| | |none |acc_norm|0.8320|± |0.0118|
|winogrande |Yaml |none |acc |0.6267|± |0.0136|
|
abhijithyess/chandrayaan-LunarLander
|
abhijithyess
| 2023-08-23T15:52:36Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T15:51:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.74 +/- 19.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub("abhijithyess/chandrayaan-LunarLander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yinde/en-ha
|
yinde
| 2023-08-23T15:44:47Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-23T14:30:34Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: saad-finetuned-NLP-opus-mt-en-ha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# saad-finetuned-NLP-opus-mt-en-ha
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ha](https://huggingface.co/Helsinki-NLP/opus-mt-en-ha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5787
- Bleu: 68.0524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
julien-c/bert-base-uncased-duplicate
|
julien-c
| 2023-08-23T15:42:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"coreml",
"onnx",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-23T15:42:41Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
duplicated_from: bert-base-uncased
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
Another 24 smaller models were released afterward.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
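A short illustrative sketch of this 80/10/10 rule (not the original BERT preprocessing code; `mask_id` and `vocab_size` would come from the tokenizer):
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Apply the BERT-style masking rule to a list of token ids."""
    inputs, labels = list(token_ids), list(token_ids)
    for i in range(len(token_ids)):
        if random.random() < mlm_prob:
            r = random.random()
            if r < 0.8:          # 80% of the time: replace with [MASK]
                inputs[i] = mask_id
            elif r < 0.9:        # 10% of the time: replace with a random token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token
        else:
            labels[i] = -100     # only masked positions contribute to the loss
    return inputs, labels
```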
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
Sarthak04/bloom_train_v1
|
Sarthak04
| 2023-08-23T15:41:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T15:41:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
reginaboateng/pferrier_umls_relational_extraction_adapter_BERT
|
reginaboateng
| 2023-08-23T15:39:54Z | 1 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:umls",
"bert",
"dataset:umls",
"region:us"
] | null | 2023-08-23T15:39:51Z |
---
tags:
- adapterhub:umls
- adapter-transformers
- bert
datasets:
- umls
---
# Adapter `reginaboateng/pferrier_umls_relational_extraction_adapter_BERT` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("reginaboateng/pferrier_umls_relational_extraction_adapter_BERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
reginaboateng/compacter_umls_relational_extraction_adapter_BERT
|
reginaboateng
| 2023-08-23T15:39:46Z | 0 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:umls",
"dataset:umls",
"region:us"
] | null | 2023-08-23T15:39:43Z |
---
tags:
- bert
- adapterhub:umls
- adapter-transformers
datasets:
- umls
---
# Adapter `reginaboateng/compacter_umls_relational_extraction_adapter_BERT` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("reginaboateng/compacter_umls_relational_extraction_adapter_BERT", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
ThuyNT03/xlm-roberta-base-VietNam-aug_replace_vi
|
ThuyNT03
| 2023-08-23T15:33:44Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T15:29:54Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-VietNam-aug_replace_vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-VietNam-aug_replace_vi
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4570
- Accuracy: 0.81
- F1: 0.7899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.996 | 1.0 | 76 | 0.8824 | 0.58 | 0.4258 |
| 0.8331 | 2.0 | 152 | 0.6596 | 0.8 | 0.7466 |
| 0.6019 | 3.0 | 228 | 0.6321 | 0.8 | 0.7465 |
| 0.4534 | 4.0 | 304 | 0.4265 | 0.82 | 0.8108 |
| 0.3936 | 5.0 | 380 | 0.4570 | 0.81 | 0.7899 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
shuvom/llama-midjourney-FT
|
shuvom
| 2023-08-23T15:11:45Z | 1 | 0 |
transformers
|
[
"transformers",
"art",
"text2text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-29T03:54:25Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- art
pipeline_tag: text2text-generation
---
|
daochf/Lora-Opt2_7b-PuceDS-v03x50
|
daochf
| 2023-08-23T15:06:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T15:06:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
- PEFT 0.5.0
|
eliept1/deepqn-SpaceInvadersNoFrameskip-v4
|
eliept1
| 2023-08-23T14:59:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T14:58:06Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 757.00 +/- 223.06
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eliept1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eliept1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga eliept1
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
pig4431/xlm-roberta-HeQ-v1
|
pig4431
| 2023-08-23T14:59:47Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-23T13:00:11Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-HeQ-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-HeQ-v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5099
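A minimal usage sketch (the example question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="pig4431/xlm-roberta-HeQ-v1")
print(qa(question="Where does Dana live?", context="My name is Dana and I live in Haifa."))
```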
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 425 | 1.3360 |
| 1.0299 | 2.0 | 850 | 1.3235 |
| 0.8198 | 3.0 | 1275 | 1.3101 |
| 0.7801 | 4.0 | 1700 | 1.3679 |
| 0.6767 | 5.0 | 2125 | 1.4158 |
| 0.5853 | 6.0 | 2550 | 1.4657 |
| 0.5853 | 7.0 | 2975 | 1.5099 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Hamzaabbas77/distilbert-base-uncased-finetuned-cola
|
Hamzaabbas77
| 2023-08-23T14:55:44Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T13:34:54Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Hamzaabbas77/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hamzaabbas77/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6757
- Validation Loss: 0.6809
- Train Matthews Correlation: 0.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 324, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.6832 | 0.6809 | 0.0 | 0 |
| 0.6758 | 0.6809 | 0.0 | 1 |
| 0.6757 | 0.6809 | 0.0 | 2 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
larabe/test
|
larabe
| 2023-08-23T14:44:25Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-23T10:09:21Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Wishwa98/ASRForCommonVoice
|
Wishwa98
| 2023-08-23T14:41:55Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:DTU54DL/common-accent",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-22T21:40:17Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- DTU54DL/common-accent
metrics:
- wer
model-index:
- name: Whisper Small for Common Accent
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Accent
type: DTU54DL/common-accent
metrics:
- name: Wer
type: wer
value: 13.060479666319083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small for Common Accent
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Accent dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4234
- Wer Ortho: 17.9229
- Wer: 13.0605
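A minimal transcription sketch (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe an audio file with the fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="Wishwa98/ASRForCommonVoice")
print(asr("sample.wav")["text"])
```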
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1012 | 1.14 | 500 | 0.3215 | 16.3784 | 11.5941 |
| 0.0345 | 2.28 | 1000 | 0.3483 | 16.6496 | 11.8450 |
| 0.018 | 3.42 | 1500 | 0.3829 | 17.1622 | 12.4707 |
| 0.0075 | 4.57 | 2000 | 0.4069 | 17.8667 | 13.0116 |
| 0.0059 | 5.71 | 2500 | 0.4234 | 17.9229 | 13.0605 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230823213639
|
dkqjrm
| 2023-08-23T14:27:39Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T12:36:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230823213639'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230823213639
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3551
- Accuracy: 0.7545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 1.1031 | 0.5307 |
| 0.9187 | 2.0 | 624 | 0.7935 | 0.4874 |
| 0.9187 | 3.0 | 936 | 0.7082 | 0.5704 |
| 0.8508 | 4.0 | 1248 | 0.6713 | 0.6065 |
| 0.8272 | 5.0 | 1560 | 0.6997 | 0.6390 |
| 0.8272 | 6.0 | 1872 | 0.8815 | 0.6426 |
| 0.722 | 7.0 | 2184 | 1.0092 | 0.6318 |
| 0.722 | 8.0 | 2496 | 0.7370 | 0.6751 |
| 0.7377 | 9.0 | 2808 | 0.6362 | 0.7076 |
| 0.6952 | 10.0 | 3120 | 0.9842 | 0.6570 |
| 0.6952 | 11.0 | 3432 | 0.7133 | 0.7040 |
| 0.672 | 12.0 | 3744 | 0.7288 | 0.6823 |
| 0.6344 | 13.0 | 4056 | 0.7260 | 0.7220 |
| 0.6344 | 14.0 | 4368 | 0.6437 | 0.7112 |
| 0.6039 | 15.0 | 4680 | 0.7529 | 0.7184 |
| 0.6039 | 16.0 | 4992 | 1.0284 | 0.6787 |
| 0.5952 | 17.0 | 5304 | 0.8757 | 0.7256 |
| 0.5371 | 18.0 | 5616 | 0.6932 | 0.7329 |
| 0.5371 | 19.0 | 5928 | 0.7127 | 0.7148 |
| 0.5411 | 20.0 | 6240 | 1.0835 | 0.6823 |
| 0.4985 | 21.0 | 6552 | 0.9109 | 0.7292 |
| 0.4985 | 22.0 | 6864 | 1.4054 | 0.6643 |
| 0.4897 | 23.0 | 7176 | 1.0748 | 0.7112 |
| 0.4897 | 24.0 | 7488 | 1.1041 | 0.7256 |
| 0.4498 | 25.0 | 7800 | 1.0205 | 0.7040 |
| 0.4208 | 26.0 | 8112 | 1.0637 | 0.7148 |
| 0.4208 | 27.0 | 8424 | 0.8231 | 0.7329 |
| 0.4024 | 28.0 | 8736 | 0.7506 | 0.7401 |
| 0.4083 | 29.0 | 9048 | 1.1923 | 0.7184 |
| 0.4083 | 30.0 | 9360 | 1.2166 | 0.7184 |
| 0.3497 | 31.0 | 9672 | 1.2273 | 0.7220 |
| 0.3497 | 32.0 | 9984 | 0.9219 | 0.7437 |
| 0.3188 | 33.0 | 10296 | 1.1009 | 0.7401 |
| 0.2923 | 34.0 | 10608 | 0.8986 | 0.7545 |
| 0.2923 | 35.0 | 10920 | 1.2732 | 0.7509 |
| 0.2876 | 36.0 | 11232 | 1.0246 | 0.7437 |
| 0.2751 | 37.0 | 11544 | 1.0842 | 0.7545 |
| 0.2751 | 38.0 | 11856 | 1.3797 | 0.7401 |
| 0.2807 | 39.0 | 12168 | 1.2845 | 0.7401 |
| 0.2807 | 40.0 | 12480 | 1.0588 | 0.7473 |
| 0.2524 | 41.0 | 12792 | 1.3290 | 0.7365 |
| 0.2353 | 42.0 | 13104 | 1.1838 | 0.7509 |
| 0.2353 | 43.0 | 13416 | 1.6934 | 0.7292 |
| 0.2221 | 44.0 | 13728 | 1.4884 | 0.7437 |
| 0.222 | 45.0 | 14040 | 1.4472 | 0.7292 |
| 0.222 | 46.0 | 14352 | 1.6685 | 0.7365 |
| 0.2124 | 47.0 | 14664 | 1.2194 | 0.7545 |
| 0.2124 | 48.0 | 14976 | 1.4803 | 0.7437 |
| 0.1923 | 49.0 | 15288 | 1.3954 | 0.7509 |
| 0.1717 | 50.0 | 15600 | 1.4008 | 0.7401 |
| 0.1717 | 51.0 | 15912 | 1.2478 | 0.7545 |
| 0.1775 | 52.0 | 16224 | 1.2562 | 0.7545 |
| 0.1599 | 53.0 | 16536 | 1.4865 | 0.7545 |
| 0.1599 | 54.0 | 16848 | 1.3985 | 0.7473 |
| 0.1518 | 55.0 | 17160 | 1.3492 | 0.7437 |
| 0.1518 | 56.0 | 17472 | 1.3659 | 0.7437 |
| 0.1481 | 57.0 | 17784 | 1.2743 | 0.7545 |
| 0.1461 | 58.0 | 18096 | 1.3666 | 0.7509 |
| 0.1461 | 59.0 | 18408 | 1.3473 | 0.7509 |
| 0.1449 | 60.0 | 18720 | 1.3551 | 0.7545 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-VietNam-aug_delete
|
ThuyNT03
| 2023-08-23T14:25:19Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T14:19:31Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-VietNam-aug_delete
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-VietNam-aug_delete
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4111
- Accuracy: 0.83
- F1: 0.8197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9299 | 1.0 | 85 | 0.8008 | 0.58 | 0.4258 |
| 0.7524 | 2.0 | 170 | 0.4923 | 0.83 | 0.7846 |
| 0.5724 | 3.0 | 255 | 0.3849 | 0.88 | 0.8600 |
| 0.47 | 4.0 | 340 | 0.4657 | 0.8 | 0.7669 |
| 0.3942 | 5.0 | 425 | 0.4111 | 0.83 | 0.8197 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ardt-multipart/ardt-multipart-arrl_sgld_train_hopper_high-2308_1449-99
|
ardt-multipart
| 2023-08-23T14:23:26Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T13:50:46Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_sgld_train_hopper_high-2308_1449-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_sgld_train_hopper_high-2308_1449-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ArneJa/ppo-LunarLander-v2
|
ArneJa
| 2023-08-23T14:21:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T14:20:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.51 +/- 23.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is hypothetical; adjust it to the actual checkpoint in this repo.
checkpoint = load_from_hub("ArneJa/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
hanifmz0711/my_awesome_model
|
hanifmz0711
| 2023-08-23T14:19:08Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T13:38:00Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: hanifmz0711/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hanifmz0711/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2467
- Validation Loss: 0.1880
- Train Accuracy: 0.9264
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1562, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
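For reference, the optimizer configuration above can be reproduced roughly as follows (a sketch only; the original training script is not part of this card):
```python
import tensorflow as tf

# Approximate reconstruction of the optimizer listed above (sketch).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5, decay_steps=1562, end_learning_rate=0.0, power=1.0
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-8
)
```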
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2467 | 0.1880 | 0.9264 | 0 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
seungkim1313/qa_model
|
seungkim1313
| 2023-08-23T14:15:12Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_kor_v1",
"base_model:deepset/minilm-uncased-squad2",
"base_model:finetune:deepset/minilm-uncased-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-23T13:52:12Z |
---
license: cc-by-4.0
base_model: deepset/minilm-uncased-squad2
tags:
- generated_from_trainer
datasets:
- squad_kor_v1
model-index:
- name: qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_model
This model is a fine-tuned version of [deepset/minilm-uncased-squad2](https://huggingface.co/deepset/minilm-uncased-squad2) on the squad_kor_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2803
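A minimal inference sketch (assuming the checkpoint loads directly into the `question-answering` pipeline; the question and context strings are placeholders):
```python
from transformers import pipeline

# Extractive QA with this checkpoint; the example strings below are placeholders.
qa = pipeline("question-answering", model="seungkim1313/qa_model")
result = qa(question="대한민국의 수도는 어디인가?", context="대한민국의 수도는 서울이다.")
print(result["answer"], result["score"])
```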
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4482 | 1.0 | 25 | 3.8476 |
| 4.1886 | 2.0 | 50 | 3.3495 |
| 2.8781 | 3.0 | 75 | 3.2032 |
| 3.5417 | 4.0 | 100 | 3.3601 |
| 2.1682 | 5.0 | 125 | 3.2218 |
| 3.1787 | 6.0 | 150 | 3.3264 |
| 2.814 | 7.0 | 175 | 3.3053 |
| 2.7755 | 8.0 | 200 | 3.2801 |
| 1.9859 | 9.0 | 225 | 3.4267 |
| 2.1119 | 10.0 | 250 | 3.2803 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
aehrm/redewiedergabe-reported
|
aehrm
| 2023-08-23T14:12:28Z | 7 | 0 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"region:us"
] |
token-classification
| 2023-05-16T21:57:22Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
---
# REDEWIEDERGABE Tagger: reported STWR
This model is part of an ensemble of binary taggers that recognize German speech, thought and writing representation (STWR) and that is used in [LLpro](https://github.com/cophi-wue/LLpro). The taggers can be used to automatically detect and annotate the following 4 types of speech, thought and writing representation in German texts:
| STWR type | Example | Translation |
|--------------------------------|-------------------------------------------------------------------------|----------------------------------------------------------|
| direct | Dann sagte er: **"Ich habe Hunger."** | Then he said: **"I'm hungry."** |
| free indirect ('erlebte Rede') | Er war ratlos. **Woher sollte er denn hier bloß ein Mittagessen bekommen?** | He was at a loss. **Where should he ever find lunch here?** |
| indirect | Sie fragte, **wo das Essen sei.** | She asked **where the food was.** |
| reported (**this tagger**) | **Sie sprachen über das Mittagessen.** | **They talked about lunch.** |
The ensemble is trained on the [REDEWIEDERGABE corpus](https://github.com/redewiedergabe/corpus) ([Annotation guidelines](http://redewiedergabe.de/richtlinien/richtlinien.html)), fine-tuning each tagger on the domain-adapted [lkonle/fiction-gbert-large](https://huggingface.co/lkonle/fiction-gbert-large). ([Training Code](https://github.com/cophi-wue/LLpro/blob/main/contrib/train_redewiedergabe.py))
**F1-Scores:**
| STWR type | F1-Score |
|-----------|-----------|
| direct | 90.76 |
| indirect | 79.16 |
| free indirect | 58.00 |
| **reported (this tagger)** | **70.47** |
----
**Demo Usage:**
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence('Sie sprachen über das Mittagessen. Sie fragte, wo das Essen sei. Woher sollte er das wissen? Dann sagte er: "Ich habe Hunger."')
rwtypes = ['direct', 'indirect', 'freeindirect', 'reported']
for rwtype in rwtypes:
model = SequenceTagger.load(f'aehrm/redewiedergabe-{rwtype}')
model.predict(sentence)
print(rwtype, [ x.data_point.text for x in sentence.get_labels() ])
# >>> direct ['"', 'Ich', 'habe', 'Hunger', '.', '"']
# >>> indirect ['wo', 'das', 'Essen', 'sei', '.']
# >>> freeindirect ['Woher', 'sollte', 'er', 'das', 'wissen', '?']
# >>> reported ['Sie', 'sprachen', 'über', 'das', 'Mittagessen', '.', 'Woher', 'sollte', 'er', 'das', 'wissen', '?']
```
**Cite**:
Please cite the following paper when using this model.
```
@inproceedings{ehrmanntraut-et-al-llpro-2023,
address = {Ingolstadt, Germany},
title = {{LLpro}: A Literary Language Processing Pipeline for {German} Narrative Text},
booktitle = {Proceedings of the 19th Conference on Natural Language Processing ({KONVENS} 2023)},
publisher = {{KONVENS} 2023 Organizers},
author = {Ehrmanntraut, Anton and Konle, Leonard and Jannidis, Fotis},
year = {2023},
}
```
|
aehrm/redewiedergabe-indirect
|
aehrm
| 2023-08-23T14:12:11Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"region:us"
] |
token-classification
| 2023-05-16T21:57:14Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
---
# REDEWIEDERGABE Tagger: indirect STWR
This model is part of an ensemble of binary taggers that recognize German speech, thought and writing representation (STWR) and that is used in [LLpro](https://github.com/cophi-wue/LLpro). The taggers can be used to automatically detect and annotate the following 4 types of speech, thought and writing representation in German texts:
| STWR type | Example | Translation |
|--------------------------------|-------------------------------------------------------------------------|----------------------------------------------------------|
| direct | Dann sagte er: **"Ich habe Hunger."** | Then he said: **"I'm hungry."** |
| free indirect ('erlebte Rede') | Er war ratlos. **Woher sollte er denn hier bloß ein Mittagessen bekommen?** | He was at a loss. **Where should he ever find lunch here?** |
| indirect (**this tagger**) | Sie fragte, **wo das Essen sei.** | She asked **where the food was.** |
| reported | **Sie sprachen über das Mittagessen.** | **They talked about lunch.** |
The ensemble is trained on the [REDEWIEDERGABE corpus](https://github.com/redewiedergabe/corpus) ([Annotation guidelines](http://redewiedergabe.de/richtlinien/richtlinien.html)), fine-tuning each tagger on the domain-adapted [lkonle/fiction-gbert-large](https://huggingface.co/lkonle/fiction-gbert-large). ([Training Code](https://github.com/cophi-wue/LLpro/blob/main/contrib/train_redewiedergabe.py))
**F1-Scores:**
| STWR type | F1-Score |
|-----------|-----------|
| direct | 90.76 |
| **indirect (this tagger)** | **79.16** |
| free indirect | 58.00 |
| reported | 70.47 |
----
**Demo Usage:**
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence('Sie sprachen über das Mittagessen. Sie fragte, wo das Essen sei. Woher sollte er das wissen? Dann sagte er: "Ich habe Hunger."')
rwtypes = ['direct', 'indirect', 'freeindirect', 'reported']
for rwtype in rwtypes:
model = SequenceTagger.load(f'aehrm/redewiedergabe-{rwtype}')
model.predict(sentence)
print(rwtype, [ x.data_point.text for x in sentence.get_labels() ])
# >>> direct ['"', 'Ich', 'habe', 'Hunger', '.', '"']
# >>> indirect ['wo', 'das', 'Essen', 'sei', '.']
# >>> freeindirect ['Woher', 'sollte', 'er', 'das', 'wissen', '?']
# >>> reported ['Sie', 'sprachen', 'über', 'das', 'Mittagessen', '.', 'Woher', 'sollte', 'er', 'das', 'wissen', '?']
```
**Cite**:
Please cite the following paper when using this model.
```
@inproceedings{ehrmanntraut-et-al-llpro-2023,
address = {Ingolstadt, Germany},
title = {{LLpro}: A Literary Language Processing Pipeline for {German} Narrative Text},
booktitle = {Proceedings of the 19th Conference on Natural Language Processing ({KONVENS} 2023)},
publisher = {{KONVENS} 2023 Organizers},
author = {Ehrmanntraut, Anton and Konle, Leonard and Jannidis, Fotis},
year = {2023},
}
```
|
ThuyNT03/xlm-roberta-base-VietNam-train
|
ThuyNT03
| 2023-08-23T14:04:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T13:59:03Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-VietNam-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-VietNam-train
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4360
- Accuracy: 0.84
- F1: 0.7909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9668 | 1.0 | 44 | 0.8894 | 0.58 | 0.4258 |
| 0.8302 | 2.0 | 88 | 0.6932 | 0.59 | 0.4479 |
| 0.6805 | 3.0 | 132 | 0.5111 | 0.84 | 0.7875 |
| 0.5672 | 4.0 | 176 | 0.4705 | 0.85 | 0.7981 |
| 0.5517 | 5.0 | 220 | 0.4360 | 0.84 | 0.7909 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
daochf/Ludwig-Opt2_7b-PuceDS-v02
|
daochf
| 2023-08-23T13:54:21Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T13:54:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
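The quantization config listed above can be reconstructed in code roughly as follows (a sketch, not part of the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the bitsandbytes settings listed above (sketch).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```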
### Framework versions
- PEFT 0.5.0
|
ardt-multipart/ardt-multipart-arrl_sgld_train_hopper_high-2308_1414-66
|
ardt-multipart
| 2023-08-23T13:49:23Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T13:15:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_sgld_train_hopper_high-2308_1414-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_sgld_train_hopper_high-2308_1414-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hanifmz0711/online_shop_rating2
|
hanifmz0711
| 2023-08-23T13:48:33Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T13:45:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hanifmz0711/online_shop_rating2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hanifmz0711/online_shop_rating2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Train Accuracy: 0.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 808, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| nan | nan | 0.0 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
ardt-multipart/ardt-multipart-arrl_sgld_train_halfcheetah_high-2308_1230-99
|
ardt-multipart
| 2023-08-23T13:47:19Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T11:32:01Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_sgld_train_halfcheetah_high-2308_1230-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_sgld_train_halfcheetah_high-2308_1230-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nishant-glance/path-to-save-model-diffusion-2-1
|
nishant-glance
| 2023-08-23T13:19:14Z | 33 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T12:56:23Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - nishant-glance/path-to-save-model-diffusion-2-1
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
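A minimal inference sketch with `diffusers` (assuming the repo loads as a standard Stable Diffusion pipeline):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load this DreamBooth fine-tune from the Hub and sample with the instance prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "nishant-glance/path-to-save-model-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks-dog.png")
```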
|
Stomper10/textual_inversion_CXR_card
|
Stomper10
| 2023-08-23T13:17:28Z | 12 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T11:10:46Z |
---
license: creativeml-openrail-m
base_model: /shared/s1/lab06/wonyoung/diffusers/textual_inversion_CXR
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Stomper10/textual_inversion_CXR_card
These are textual inversion adaptation weights for /shared/s1/lab06/wonyoung/diffusers/textual_inversion_CXR. You can find some example images below.




















|
dheerajnarne/my-luffy
|
dheerajnarne
| 2023-08-23T13:03:54Z | 3 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T12:59:53Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-LUFFY Dreambooth model trained by dheerajnarne following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SNU-109
Sample pictures of this concept:



|
gabrielgme/results
|
gabrielgme
| 2023-08-23T12:46:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T21:39:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
wcfr/wav2vec2-conformer-rel-pos-base-cantonese
|
wcfr
| 2023-08-23T12:42:15Z | 51 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2-conformer",
"pretraining",
"yue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-23T11:17:05Z |
---
license: apache-2.0
language:
- yue
library_name: transformers
---
# Cantonese Wav2Vec2-Conformer-Base with Relative Position Embeddings
wav2vec 2.0 Conformer with relative position embeddings, pretrained on 2.8K hours of spontaneous Cantonese speech sampled at 16 kHz.
Note: This model has not been fine-tuned on labeled text data.
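A minimal feature-extraction sketch with `transformers` (the feature extractor is constructed locally in case the repo does not ship a preprocessor config, and the waveform is a placeholder):
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ConformerModel

model = Wav2Vec2ConformerModel.from_pretrained("wcfr/wav2vec2-conformer-rel-pos-base-cantonese")
extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)

waveform = torch.zeros(16000)  # placeholder: one second of silence at 16 kHz
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, frames, hidden_size)
```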
## Alternative Version
An alternative version of the model, pre-trained on the same dataset but with `layer_norm_first` set to `false`, is available [here](https://drive.google.com/file/d/1rbP-6pZfR5ieqAwd5_X2KzipLuKpXSsQ/view?usp=sharing) as a fairseq checkpoint and may give better downstream results.
## Citation
Please cite the following paper if you use the model.
```
@inproceedings{huang23h_interspeech,
author={Ranzo Huang and Brian Mak},
title={{wav2vec 2.0 ASR for Cantonese-Speaking Older Adults in a Clinical Setting}},
year=2023,
booktitle={Proc. INTERSPEECH 2023},
pages={4958--4962},
doi={10.21437/Interspeech.2023-2470}
}
```
|
red1xe/Llama-2-7B-codeGPT-v2
|
red1xe
| 2023-08-23T12:42:01Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged",
"base_model:finetune:TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged",
"region:us"
] | null | 2023-08-23T12:20:02Z |
---
base_model: TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged
tags:
- generated_from_trainer
model-index:
- name: Llama-2-7B-codeGPT-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7B-codeGPT-v2
This model is a fine-tuned version of [TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged](https://huggingface.co/TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 150
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mastergyp/llama2-qlora-finetunined-french
|
mastergyp
| 2023-08-23T12:30:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T12:30:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
newbking/miaokaRealityMIX_miaokaRealityMixV10
|
newbking
| 2023-08-23T12:25:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-23T12:25:31Z |
---
license: creativeml-openrail-m
---
|
ThuyNT03/xlm-roberta-base-Mixed-aug_insert_vi
|
ThuyNT03
| 2023-08-23T12:19:24Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T12:09:04Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Mixed-aug_insert_vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Mixed-aug_insert_vi
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5693
- Accuracy: 0.81
- F1: 0.7858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9098 | 1.0 | 82 | 0.6306 | 0.75 | 0.7083 |
| 0.6867 | 2.0 | 164 | 0.7511 | 0.77 | 0.7175 |
| 0.5754 | 3.0 | 246 | 0.5041 | 0.82 | 0.7719 |
| 0.4309 | 4.0 | 328 | 0.5971 | 0.8 | 0.7754 |
| 0.3739 | 5.0 | 410 | 0.5693 | 0.81 | 0.7858 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jjyaoao/Echotune_clean_test
|
jjyaoao
| 2023-08-23T12:10:19Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"data2vec-audio",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"base_model:facebook/data2vec-audio-base-960h",
"base_model:finetune:facebook/data2vec-audio-base-960h",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T10:45:44Z |
---
license: apache-2.0
base_model: facebook/data2vec-audio-base-960h
tags:
- generated_from_trainer
datasets:
- librispeech_asr
metrics:
- wer
model-index:
- name: jjyaoao/Echotune_clean_test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: librispeech_asr
type: librispeech_asr
config: clean
split: test
args: clean
metrics:
- name: Wer
type: wer
value: 0.037368222891566265
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jjyaoao/Echotune_clean_test
This model is a fine-tuned version of [facebook/data2vec-audio-base-960h](https://huggingface.co/facebook/data2vec-audio-base-960h) on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0679
- Wer Ortho: 0.0369
- Wer: 0.0374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 34246.8
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|
| 0.0602 | 0.21 | 500 | 0.0476 | 0.0435 | 0.0439 |
| 0.0478 | 0.42 | 1000 | 0.0436 | 0.0411 | 0.0414 |
| 0.0492 | 0.63 | 1500 | 0.0443 | 0.0412 | 0.0415 |
| 0.0426 | 0.84 | 2000 | 0.0439 | 0.0401 | 0.0403 |
| 0.0386 | 1.05 | 2500 | 0.0445 | 0.0391 | 0.0395 |
| 0.0409 | 1.26 | 3000 | 0.0438 | 0.0394 | 0.0399 |
| 0.0437 | 1.47 | 3500 | 0.0444 | 0.0389 | 0.0393 |
| 0.0349 | 1.68 | 4000 | 0.0450 | 0.0392 | 0.0396 |
| 0.0469 | 1.89 | 4500 | 0.0442 | 0.0374 | 0.0378 |
| 0.033 | 2.1 | 5000 | 0.0454 | 0.0359 | 0.0363 |
| 0.0395 | 2.31 | 5500 | 0.0462 | 0.0363 | 0.0367 |
| 0.0321 | 2.52 | 6000 | 0.0457 | 0.0365 | 0.0369 |
| 0.0385 | 2.73 | 6500 | 0.0455 | 0.0355 | 0.0358 |
| 0.0378 | 2.94 | 7000 | 0.0449 | 0.0361 | 0.0366 |
| 0.0435 | 3.15 | 7500 | 0.0440 | 0.0355 | 0.0360 |
| 0.0436 | 3.36 | 8000 | 0.0466 | 0.0339 | 0.0344 |
| 0.0394 | 3.57 | 8500 | 0.0480 | 0.0345 | 0.0350 |
| 0.0448 | 3.78 | 9000 | 0.0478 | 0.0338 | 0.0342 |
| 0.0451 | 3.99 | 9500 | 0.0460 | 0.0355 | 0.0361 |
| 0.035 | 4.2 | 10000 | 0.0485 | 0.0369 | 0.0374 |
| 0.0387 | 4.41 | 10500 | 0.0487 | 0.0358 | 0.0362 |
| 0.0479 | 4.62 | 11000 | 0.0496 | 0.0363 | 0.0368 |
| 0.0456 | 4.83 | 11500 | 0.0491 | 0.0359 | 0.0365 |
| 0.0372 | 5.04 | 12000 | 0.0507 | 0.0355 | 0.0360 |
| 0.0395 | 5.25 | 12500 | 0.0526 | 0.0353 | 0.0356 |
| 0.0323 | 5.46 | 13000 | 0.0515 | 0.0368 | 0.0373 |
| 0.0354 | 5.67 | 13500 | 0.0524 | 0.0338 | 0.0343 |
| 0.031 | 5.88 | 14000 | 0.0531 | 0.0349 | 0.0357 |
| 0.0295 | 6.09 | 14500 | 0.0560 | 0.0344 | 0.0349 |
| 0.032 | 6.31 | 15000 | 0.0564 | 0.0364 | 0.0369 |
| 0.0462 | 6.52 | 15500 | 0.0548 | 0.0358 | 0.0365 |
| 0.0467 | 6.73 | 16000 | 0.0562 | 0.0347 | 0.0352 |
| 0.0437 | 6.94 | 16500 | 0.0573 | 0.0354 | 0.0359 |
| 0.0357 | 7.15 | 17000 | 0.0561 | 0.0359 | 0.0362 |
| 0.0297 | 7.36 | 17500 | 0.0602 | 0.0347 | 0.0351 |
| 0.0388 | 7.57 | 18000 | 0.0552 | 0.0341 | 0.0345 |
| 0.0392 | 7.78 | 18500 | 0.0533 | 0.0326 | 0.0331 |
| 0.0419 | 7.99 | 19000 | 0.0535 | 0.0343 | 0.0349 |
| 0.0326 | 8.2 | 19500 | 0.0614 | 0.0374 | 0.0378 |
| 0.0423 | 8.41 | 20000 | 0.0585 | 0.0341 | 0.0346 |
| 0.0326 | 8.62 | 20500 | 0.0586 | 0.0356 | 0.0362 |
| 0.0448 | 8.83 | 21000 | 0.0637 | 0.0371 | 0.0375 |
| 0.0763 | 9.04 | 21500 | 0.0607 | 0.0359 | 0.0364 |
| 0.0317 | 9.25 | 22000 | 0.0635 | 0.0400 | 0.0405 |
| 0.0326 | 9.46 | 22500 | 0.0603 | 0.0368 | 0.0372 |
| 0.0393 | 9.67 | 23000 | 0.0665 | 0.0380 | 0.0385 |
| 0.0341 | 9.88 | 23500 | 0.0664 | 0.0408 | 0.0413 |
| 0.0351 | 10.09 | 24000 | 0.0638 | 0.0384 | 0.0388 |
| 0.0412 | 10.3 | 24500 | 0.0687 | 0.0380 | 0.0384 |
| 0.0359 | 10.51 | 25000 | 0.0634 | 0.0379 | 0.0385 |
| 0.047 | 10.72 | 25500 | 0.0652 | 0.0373 | 0.0378 |
| 0.0346 | 10.93 | 26000 | 0.0671 | 0.0390 | 0.0396 |
| 0.0366 | 11.14 | 26500 | 0.0664 | 0.0387 | 0.0393 |
| 0.0359 | 11.35 | 27000 | 0.0669 | 0.0369 | 0.0374 |
| 0.0366 | 11.56 | 27500 | 0.0705 | 0.0358 | 0.0364 |
| 0.054 | 11.77 | 28000 | 0.0659 | 0.0383 | 0.0390 |
| 0.0335 | 11.98 | 28500 | 0.0679 | 0.0369 | 0.0374 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
makande/llama2-qlora-finetunined-french
|
makande
| 2023-08-23T12:03:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T11:52:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
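A minimal loading sketch for the adapter (the base checkpoint below is an assumption; the card does not name it):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the card does not state the base model
bnb = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16
)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, "makande/llama2-qlora-finetunined-french")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```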
|
ardt-multipart/ardt-multipart-arrl_train_hopper_high-2308_1216-66
|
ardt-multipart
| 2023-08-23T11:57:12Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T11:17:57Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_train_hopper_high-2308_1216-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_train_hopper_high-2308_1216-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CiroN2022/retro-rocket
|
CiroN2022
| 2023-08-23T11:52:25Z | 6 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:52:21Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: retro_rocket
widget:
- text: retro_rocket
---
# Retro Rocket

None
## Image examples for the model:









|
CiroN2022/alchemy
|
CiroN2022
| 2023-08-23T11:52:10Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:52:02Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: alchemy
widget:
- text: alchemy
---
# Alchemy

Introducing Alchemy Model: unleashing the art of alchemy.

Alchemy Model, trained for 15 epochs (2,480 steps), is an AI model inspired by the captivating world of alchemy art. Drawing from the rich symbolism and mystical aesthetics of alchemy, it can generate mesmerizing and enchanting images. By harnessing the intricate patterns, esoteric symbols, and vibrant color palettes associated with alchemical art, Alchemy Model empowers users to unlock their creative potential and explore the realms of artistic transformation.
## Image examples for the model:









|
CiroN2022/echoes
|
CiroN2022
| 2023-08-23T11:51:48Z | 14 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:51:45Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Echoes
widget:
- text: Echoes
---
# Echoes

None
## Image examples for the model:









|
CiroN2022/anipunks
|
CiroN2022
| 2023-08-23T11:50:59Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:50:56Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: anipunks
widget:
- text: anipunks
---
# AniPunks

None
## Image examples for the model:









|
CiroN2022/ascii-art
|
CiroN2022
| 2023-08-23T11:50:21Z | 768 | 12 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:50:18Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ascii_art
widget:
- text: ascii_art
---
# Ascii Art

None
## Image examples for the model:









|
CiroN2022/ouija
|
CiroN2022
| 2023-08-23T11:48:56Z | 3 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:48:50Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ouija
widget:
- text: ouija
---
# Ouija

None
## Image examples for the model:









|
CiroN2022/face-robotics
|
CiroN2022
| 2023-08-23T11:47:09Z | 5 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:47:06Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text:
---
# Face Robotics

None
## Image examples for the model:









|
CiroN2022/xenomorph-book
|
CiroN2022
| 2023-08-23T11:46:24Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:46:21Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text:
---
# Xenomorph Book

None
## Image examples for the model:









|
CiroN2022/alien-god
|
CiroN2022
| 2023-08-23T11:43:37Z | 5 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:43:34Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text:
---
# Alien God

None
## Image examples for the model:









|
CiroN2022/cyber-graphic
|
CiroN2022
| 2023-08-23T11:42:35Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:42:31Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text:
---
# Cyber Graphic

Introducing Cyber Graphic Model: an AI model for cyberpunk and graphic art.

Cyber Graphic Model is designed to generate captivating computer art, poster art, and cyberpunk-inspired visuals.

**The SDXL version works on its own without any other LoRAs.**

**Only for the 1.5 versions:** to achieve the best results, a winning combination involves utilizing both the "fine-tuned" version and the "general style" models. Each model plays a specific and complementary role, enhancing the overall output and providing a powerful toolset to unlock your creative potential.

The training of Cyber Graphic Model utilized a carefully curated dataset consisting of a wide range of computer art, poster art, and cyberpunk-themed images. The dataset encompassed various styles, compositions, color palettes, and artistic techniques prevalent in the cyberpunk and graphic art genres.
## Image examples for the model:









|
CiroN2022/skeleton-toy
|
CiroN2022
| 2023-08-23T11:42:14Z | 11 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-08-23T11:42:10Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text:
---
# Skeleton Toy

None
## Image examples for the model:









|