modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 06:31:37) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 06:31:07) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
ckandemir/a2c-PandaReachDense-v3
|
ckandemir
| 2023-08-14T01:08:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-14T01:02:42Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository files for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (the filename is an assumption) and load it
checkpoint = load_from_hub("ckandemir/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
kkmkorea/qlora-polyglot-12.8b
|
kkmkorea
| 2023-08-14T00:50:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-14T00:50:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent code sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
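For reference, a minimal sketch of how an equivalent config could be built with `transformers.BitsAndBytesConfig` (the base model name below is an assumption; the card does not state it):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Reconstruct the 4-bit NF4 config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-12.8b",  # assumed base model for this QLoRA adapter
    quantization_config=bnb_config,
    device_map="auto",
)
```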
### Framework versions
- PEFT 0.5.0.dev0
|
C-Lo/balanced_gendered-dataset
|
C-Lo
| 2023-08-14T00:21:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-14T00:18:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: balanced_gendered-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# balanced_gendered-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
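The same settings can be reproduced roughly with `transformers.TrainingArguments`; a minimal sketch (the `output_dir` is a placeholder, and the Adam betas/epsilon are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="balanced_gendered-dataset",  # placeholder output directory
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
)
```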
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
MichaelYxWang/q-FrozenLake-v1-4x4-noSlippery
|
MichaelYxWang
| 2023-08-14T00:21:36Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-14T00:21:34Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # `load_from_hub` here is assumed to be the helper defined in the Deep RL course utilities

model = load_from_hub(repo_id="MichaelYxWang/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
johnowhitaker/gaussian_splatter_models
|
johnowhitaker
| 2023-08-14T00:14:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-13T23:58:24Z |
Adding models from https://github.com/graphdeco-inria/gaussian-splatting/ for easier access so you don't have to download the whole big models.zip file.
TODO: show usage
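In the meantime, a single file can be fetched from this repo with `huggingface_hub`; a minimal sketch (the filename below is hypothetical; browse the repository to find the real ones):
```python
from huggingface_hub import hf_hub_download

# Hypothetical filename: download one checkpoint instead of the whole models.zip
local_path = hf_hub_download(
    repo_id="johnowhitaker/gaussian_splatter_models",
    filename="point_cloud.ply",
)
print(local_path)
```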
|
Nelver28/grailsolver-test-10
|
Nelver28
| 2023-08-14T00:13:43Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-14T00:13:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
HexHands/finishABOUTME
|
HexHands
| 2023-08-14T00:04:07Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T01:56:24Z |
---
license: cc-by-4.0
language: en
tags:
- text-generation
pipeline_tag: text-generation
widget:
- text: "My name is "
- text: "I believe that I need to be more friendly."
- text: "Follow @griffpatch!"
- text: "How will my projects get better?"
---
# finishABOUTME
finishABOUTME is a PyTorch model trained on 2,000 Scratch About Me sections.
It is meant to finish any About Me section!
# Example
Input: This Scratch Studio will reach 100 followers in a few days!\n
Output: This Scratch Studio will reach 100 followers in a few days!\nThis studio here so much slower. Sorry for the inconveni have all, but we get every monday feel free to add projects about duckling Pond!\n\nThe Duckling Pond
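# Usage
A minimal sketch of generating a completion with the `transformers` pipeline (the prompt is taken from the widget examples above):
```python
from transformers import pipeline

# Load the model and complete an About Me snippet
generator = pipeline("text-generation", model="HexHands/finishABOUTME")
print(generator("My name is ", max_new_tokens=60)[0]["generated_text"])
```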
|
nihal-tw/finetuned-f7b
|
nihal-tw
| 2023-08-13T23:49:41Z | 31 | 0 |
peft
|
[
"peft",
"medical",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-13T23:11:52Z |
---
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
tags:
- medical
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
lockiultra/rating_model
|
lockiultra
| 2023-08-13T23:48:40Z | 67 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T23:45:04Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: rating_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rating_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
javinfamous/infamous_rias_v2
|
javinfamous
| 2023-08-13T23:39:02Z | 0 | 0 | null |
[
"rvc",
"Audio-to-Audio",
"license:openrail",
"region:us"
] | null | 2023-08-13T14:56:05Z |
---
license: openrail
tags:
- rvc
- Audio-to-Audio
---
# Infamous_rias_v2 Model ID

## Model Details
This model of Rias Gremory from Highschool DxD was created from a 6-minute audio dataset, trained for 32 epochs on RVC V2.
- **Developed by:** javinfamous
|
nacielo/wav2BertMusicfreeze
|
nacielo
| 2023-08-13T23:30:28Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T22:21:33Z |
---
base_model: ''
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: wav2BertMusicfreeze
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2BertMusicfreeze
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3867
- Rouge1: 27.1456
- Rouge2: 7.625
- Rougel: 20.1034
- Rougelsum: 20.0485
- Gen Len: 46.26
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 5.1626 | 1.0 | 1361 | 3.9917 | 24.7464 | 5.7911 | 18.5211 | 18.5124 | 65.15 |
| 3.4372 | 2.0 | 2722 | 3.0113 | 24.0633 | 5.6872 | 18.4731 | 18.4535 | 40.02 |
| 2.9324 | 3.0 | 4083 | 2.6271 | 32.2681 | 8.0887 | 23.541 | 23.4982 | 54.76 |
| 2.7227 | 4.0 | 5444 | 2.4558 | 29.1184 | 6.5853 | 21.5936 | 21.5896 | 48.21 |
| 2.6377 | 5.0 | 6805 | 2.3867 | 27.1456 | 7.625 | 20.1034 | 20.0485 | 46.26 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
AOLCDROM/WAV2LIP-HQ-Updated-MIRROR
|
AOLCDROM
| 2023-08-13T23:22:41Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-08-13T23:14:06Z |
This is a mirror of the weights for the Wav2Lip-HQ-Updated repo, because the linked files on Google Drive appear to be incorrect or down.
License follows the original authors' intent.
---
license: other
---
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e8_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T23:15:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T23:15:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
AmelieSchreiber/esm2_t12_35M_UR50D_RNA_LoRA_weighted
|
AmelieSchreiber
| 2023-08-13T23:13:58Z | 2 | 1 |
peft
|
[
"peft",
"transformers",
"biology",
"esm",
"esm2",
"protein",
"protein language model",
"en",
"license:mit",
"region:us"
] | null | 2023-08-13T23:01:51Z |
---
library_name: peft
license: mit
language:
- en
tags:
- transformers
- biology
- esm
- esm2
- protein
- protein language model
---
# ESM-2 RNA Binding Site LoRA
This is a Parameter Efficient Fine Tuning (PEFT) Low Rank Adaptation (LoRA) of
the [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) model for the (binary) token classification task of
predicting RNA binding sites of proteins. You can also find a version of this model
that was fine-tuned without LoRA [here](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_UR50D_rna_binding_site_predictor).
## Training procedure
This is a Low Rank Adaptation (LoRA) of `esm2_t12_35M_UR50D`,
trained on `166` protein sequences in the [RNA binding sites dataset](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites)
using an `85/15` train/test split. This model was trained with class weighting due to the imbalanced nature
of the RNA binding site dataset (fewer binding sites than non-binding sites). This model has slightly improved
precision, recall, and F1 score over [AmelieSchreiber/esm2_t12_35M_weighted_lora_rna_binding](https://huggingface.co/AmelieSchreiber/esm2_t12_35M_weighted_lora_rna_binding)
but may suffer from mild overfitting, as indicated by the training loss being slightly lower than the eval loss (see metrics below).
If you are searching for binding sites and aren't worried about false positives, the higher recall may make this model
preferable to the other RNA binding site predictors.
You can train your own version
using [this notebook](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_weighted_lora_rna_binding/blob/main/LoRA_binding_sites_no_sweeps_v2.ipynb)!
You just need the RNA `binding_sites.xml` file [found here](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites).
You may also need to run some `pip install` statements at the beginning of the script. If you are running in Colab, run:
```python
!pip install transformers[torch] datasets peft -q
```
```python
!pip install accelerate -U -q
```
Try to improve upon these metrics by adjusting the hyperparameters:
```
{'eval_loss': 0.500779926776886,
'eval_precision': 0.1708695652173913,
'eval_recall': 0.8397435897435898,
'eval_f1': 0.2839595375722543,
'eval_auc': 0.771835775620126,
'epoch': 11.0}
{'loss': 0.4171,
'learning_rate': 0.00032491416877500004,
'epoch': 11.43}
```
A similar model can also be trained using the GitHub repository with a training script and conda env YAML, which can be
[found here](https://github.com/Amelie-Schreiber/esm2_LoRA_binding_sites/tree/main). This version uses wandb sweeps for hyperparameter search.
However, it does not use class weighting.
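For reference, class weighting of the kind described above is typically done by overriding the loss in a custom `Trainer`; a hedged sketch (the weight values are placeholders, not the ones used for this model):
```python
import torch
from torch import nn
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Up-weight the rare "binding site" class (placeholder weights)
        class_weights = torch.tensor([1.0, 5.0], device=logits.device)
        loss_fct = nn.CrossEntropyLoss(weight=class_weights, ignore_index=-100)
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```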
### Framework versions
- PEFT 0.4.0
## Using the Model
To use the model, try running the following pip install statements:
```python
!pip install transformers peft -q
```
then try running:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel
import torch
# Path to the saved LoRA model
model_path = "AmelieSchreiber/esm2_t12_35M_UR50D_RNA_LoRA_weighted"
# ESM2 base model
base_model_path = "facebook/esm2_t12_35M_UR50D"
# Load the model
base_model = AutoModelForTokenClassification.from_pretrained(base_model_path)
loaded_model = PeftModel.from_pretrained(base_model, model_path)
# Ensure the model is in evaluation mode
loaded_model.eval()
# Load the tokenizer
loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Protein sequence for inference
protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence
# Tokenize the sequence
inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length')
# Run the model
with torch.no_grad():
    logits = loaded_model(**inputs).logits
# Get predictions
tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens
predictions = torch.argmax(logits, dim=2)
# Define labels
id2label = {
    0: "No binding site",
    1: "Binding site"
}
# Print the predicted labels for each token
for token, prediction in zip(tokens, predictions[0].numpy()):
    if token not in ['<pad>', '<cls>', '<eos>']:
        print((token, id2label[prediction]))
```
|
D4ve-R/yellow-lora-sd15
|
D4ve-R
| 2023-08-13T23:09:19Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-12T17:29:27Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - D4ve-R/yellow-lora-sd15
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. Example images are shown below.




|
FireHead90544/RudraRVCs
|
FireHead90544
| 2023-08-13T23:08:19Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-09T15:39:45Z |
---
license: openrail
---
# RVCs - Some of the voices I trained
**Seiya Ryuuguuin - The Hero Is Overpowered But Overly Cautious (JP VA: Yuuichirou Umehara)**
Currently, these ones are available:
- ## [Seiya Ryuuguuin RVC v2 Mangio-Crepe (340 Epochs, 5440 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinRVC.zip)
- ## [Seiya Ryuuguuin RVC v2 RMVPE (300 Epochs, 6300 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinV2.zip) # This seems to perform better
- ## [Seiya Ryuuguuin Max RVC v2 RMVPE (400 Epochs, 8400 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinMax.zip) # Probably the best one
## Samples
- ### Mangio-Crepe
- [NEFFEX - Cold](https://cdn.discordapp.com/attachments/1090766429785178142/1138861234561753249/Seiya_Ryuuguuin_-_Cold.mp3)
- [Kenshi Yonezu - Kick Back](https://cdn.discordapp.com/attachments/1090766429785178142/1138861234951819264/Seiya_Ryuuguuin_-_Kick_Back.mp3)
- ### RMVPE
- [YOASOBI - Running Into The Night](https://cdn.discordapp.com/attachments/549264174753120267/1138908849076703332/Seiya_Ryuuguuin_-_Racing_Into_The_Night.mp3)
- [Tk From Ling Tosite Sigure - Unravel](https://cdn.discordapp.com/attachments/549264174753120267/1138908849789734972/Seiya_Ryuuguuin_-_Unravel.mp3)
- [Jin Hashimoto - Stand Proud](https://cdn.discordapp.com/attachments/549264174753120267/1138908849424834741/Seiya_Ryuuguuin_-_Stand_Proud.mp3)
- [KSUKE - Contradiction](https://cdn.discordapp.com/attachments/549264174753120267/1138908848749551636/Seiya_Ryuuguuin_-_Contradiction.mp3)
- [Smash Mouth - All Star](https://cdn.discordapp.com/attachments/549264174753120267/1138908850137858189/Seiya_Ryuuguuin_-_All_Star.mp3)
- [OxT - Clattanoia](https://cdn.discordapp.com/attachments/549264174753120267/1138908850469216327/Seiya_Ryuuguuin_-_Clattanoia.mp3)
- <video controls width="640" height="360">
<source src="https://cdn.discordapp.com/attachments/1138965403658362910/1139679982717767870/Cupid.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
- <video controls width="640" height="360">
<source src="https://cdn.discordapp.com/attachments/1138965403658362910/1140419271772606474/Yoru_Ni_Kakeru.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
|
mchablani/llama-2-7b-mini-medqa
|
mchablani
| 2023-08-13T23:02:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T07:03:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
ittailup/lallama-13b-lora
|
ittailup
| 2023-08-13T22:55:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:53:53Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e5_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:53:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:53:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e4_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:45:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:45:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e4_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T22:40:30Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:40:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e3_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:38:16Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:38:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
platzi/platzi-distilroberta-base-mrpc-glue-angrim
|
platzi
| 2023-08-13T22:31:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T21:44:25Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-angrim
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8284313725490197
- name: F1
type: f1
value: 0.8771929824561404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-angrim
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.3994
- Accuracy: 0.8284
- F1: 0.8772
## Model description
More information needed
## Intended uses & limitations
More information needed
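For illustration, a hedged inference sketch with the `transformers` pipeline, using one of the widget sentence pairs above:
```python
from transformers import pipeline

# Score a sentence pair for paraphrase equivalence (MRPC)
classifier = pipeline("text-classification", model="platzi/platzi-distilroberta-base-mrpc-glue-angrim")
result = classifier({
    "text": "Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
    "text_pair": "Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998.",
})
print(result)
```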
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5211 | 1.09 | 500 | 0.3994 | 0.8284 | 0.8772 |
| 0.3565 | 2.18 | 1000 | 0.5487 | 0.8456 | 0.8857 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e0_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:15:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:15:42Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
BrendaScar/dqn-SpaceInvadersNoFrameskip-v4
|
BrendaScar
| 2023-08-13T21:53:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T21:52:53Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 657.50 +/- 163.33
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BrendaScar -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BrendaScar -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BrendaScar
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e9_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T21:47:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:47:38Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e7_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:45:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:45:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
redstonehero/cetusmix_v4
|
redstonehero
| 2023-08-13T21:42:07Z | 751 | 4 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T20:31:26Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/angrarealflex_v20
|
redstonehero
| 2023-08-13T21:42:05Z | 29 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T20:38:38Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
SaranaAbidueva/mbart50_ru_bua
|
SaranaAbidueva
| 2023-08-13T21:38:06Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"ru",
"bua",
"bxr",
"dataset:SaranaAbidueva/buryat-russian_parallel_corpus",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-11T10:42:25Z |
---
language:
- ru
- bua
- bxr
datasets:
- SaranaAbidueva/buryat-russian_parallel_corpus
metrics:
- bleu
---
This model translates from Russian to the Buryat language.
How to use it in Python:
```python
from transformers import MBartForConditionalGeneration, MBart50Tokenizer
model = MBartForConditionalGeneration.from_pretrained("SaranaAbidueva/mbart50_ru_bua")
tokenizer = MBart50Tokenizer.from_pretrained("SaranaAbidueva/mbart50_ru_bua")
def translate(text, max_length=200, num_beams=5, repetition_penalty=5.0, **kwargs):
    encoded = tokenizer(text, return_tensors="pt")
    generated_tokens = model.generate(
        **encoded.to(model.device),
        max_length=max_length,
        num_beams=num_beams,
        repetition_penalty=repetition_penalty
    )
    return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
translate('Евгений Онегин интересная книга')
```
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e6_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:37:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:37:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e5_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:28:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:28:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e6_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T21:24:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:24:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
GeneralRincewind/ShakespeareGPT
|
GeneralRincewind
| 2023-08-13T21:20:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-13T05:59:18Z |
https://colab.research.google.com/drive/1Dlm8FA9JjjcqJIkfCagaIQWex8Ho5IKI#scrollTo=e8xIjRNsl3Bb
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GeneralRincewind/ShakespeareGPT")
model = AutoModelForCausalLM.from_pretrained("GeneralRincewind/ShakespeareGPT")
model = model.to("cuda")  # move the model to the same device as the inputs below

#### Generate text
from transformers import TextStreamer

tokenized_text = tokenizer("", return_tensors="pt", truncation=True)
input_ids = tokenized_text.input_ids
streamer = TextStreamer(tokenizer)
model.eval()
full_completion = model.generate(inputs=tokenized_text["input_ids"].to("cuda"),
                                 attention_mask=tokenized_text["attention_mask"].to("cuda"),
                                 temperature=0.9,
                                 top_k=80,
                                 top_p=0.65,
                                 do_sample=True,
                                 streamer=streamer,
                                 num_beams=1,
                                 max_new_tokens=500,
                                 eos_token_id=tokenizer.eos_token_id,
                                 pad_token_id=tokenizer.pad_token_id,
                                 repetition_penalty=1.0)
decoded_text = tokenizer.decode(full_completion[0])
print(decoded_text)
```
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e4_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:19:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:19:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e3_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:14:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:14:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e3_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:11:18Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T18:29:22Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e4_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T21:08:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:08:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e9_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:07:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:07:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e2_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:07:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:07:30Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e2_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:02:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T18:20:44Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0060
|
bigmorning
| 2023-08-13T21:02:11Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T21:02:03Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0060
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0060
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0009
- Train Accuracy: 0.0795
- Train Wermet: 7.7512
- Validation Loss: 0.5624
- Validation Accuracy: 0.0770
- Validation Wermet: 6.7969
- Epoch: 59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an optimizer construction sketch follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
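A hedged sketch of constructing the optimizer described above with the `transformers` TensorFlow utilities (the Keras `decay` field is omitted since it is 0.0):
```python
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
# model.compile(optimizer=optimizer)  # then train as usual with Keras
```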
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
| 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 |
| 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 |
| 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 |
| 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 |
| 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 |
| 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 |
| 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 |
| 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 |
| 0.0014 | 0.0795 | 8.2875 | 0.5658 | 0.0768 | 7.5768 | 50 |
| 0.0011 | 0.0795 | 8.4376 | 0.5665 | 0.0768 | 7.2469 | 51 |
| 0.0018 | 0.0795 | 8.3093 | 0.5771 | 0.0768 | 7.2637 | 52 |
| 0.0021 | 0.0795 | 7.8370 | 0.5680 | 0.0768 | 7.0030 | 53 |
| 0.0014 | 0.0795 | 7.7408 | 0.5661 | 0.0769 | 7.1664 | 54 |
| 0.0009 | 0.0795 | 7.7601 | 0.5639 | 0.0769 | 6.9567 | 55 |
| 0.0006 | 0.0795 | 7.8589 | 0.5667 | 0.0769 | 7.3058 | 56 |
| 0.0013 | 0.0795 | 7.9766 | 0.5741 | 0.0768 | 6.8820 | 57 |
| 0.0027 | 0.0795 | 7.9402 | 0.5718 | 0.0768 | 7.0204 | 58 |
| 0.0009 | 0.0795 | 7.7512 | 0.5624 | 0.0770 | 6.7969 | 59 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e8_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:00:57Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:00:56Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e1_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:00:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:00:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0059
|
bigmorning
| 2023-08-13T20:57:46Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T20:57:38Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0059
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0059
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0027
- Train Accuracy: 0.0795
- Train Wermet: 7.9402
- Validation Loss: 0.5718
- Validation Accuracy: 0.0768
- Validation Wermet: 7.0204
- Epoch: 58
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
| 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 |
| 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 |
| 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 |
| 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 |
| 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 |
| 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 |
| 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 |
| 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 |
| 0.0014 | 0.0795 | 8.2875 | 0.5658 | 0.0768 | 7.5768 | 50 |
| 0.0011 | 0.0795 | 8.4376 | 0.5665 | 0.0768 | 7.2469 | 51 |
| 0.0018 | 0.0795 | 8.3093 | 0.5771 | 0.0768 | 7.2637 | 52 |
| 0.0021 | 0.0795 | 7.8370 | 0.5680 | 0.0768 | 7.0030 | 53 |
| 0.0014 | 0.0795 | 7.7408 | 0.5661 | 0.0769 | 7.1664 | 54 |
| 0.0009 | 0.0795 | 7.7601 | 0.5639 | 0.0769 | 6.9567 | 55 |
| 0.0006 | 0.0795 | 7.8589 | 0.5667 | 0.0769 | 7.3058 | 56 |
| 0.0013 | 0.0795 | 7.9766 | 0.5741 | 0.0768 | 6.8820 | 57 |
| 0.0027 | 0.0795 | 7.9402 | 0.5718 | 0.0768 | 7.0204 | 58 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e1_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T20:54:05Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T18:12:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
CCatalao/respapers_topics
|
CCatalao
| 2023-08-13T20:53:46Z | 5 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-08-13T14:56:41Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# respapers_topics
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
This pre-trained model was built to demonstrate the use of a KeyBERT-inspired representation model within BERTopic.
It was trained on ~30,000 research paper abstracts with the KeyBERTInspired representation method (`bertopic.representation`).
The dataset was downloaded from [kaggle](https://www.kaggle.com/datasets/arashnic/urban-sound?resource=download&select=train_tm), with the two subsets (test and train) merged into a single dataset.
To access the complete code, you can visit this tutorial on my GitHub page:
[ResPapers](https://github.com/ccatalao/respapers/blob/main/respapers.ipynb)
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("CCatalao/respapers_topics")
topic_model.get_topic_info()
```
To view the KeyBERT-inspired topic representation, please use the following:
```python
>>> topic_model.get_topic(0, full=True)
{'Main': [['spin', 0.01852648864225281],
['magnetic', 0.015019436257929909],
['phase', 0.013081733986038124],
['quantum', 0.012942253723133639],
['temperature', 0.012591407440537158],
['states', 0.011025582290837643],
['field', 0.010954775154251296],
['electron', 0.010168708734803916],
['transition', 0.009728560280580357],
['energy', 0.00937042795113575]],
'KeyBERTInspired': [['quantum', 0.4072583317756653],
['phase transition', 0.35542067885398865],
['lattice', 0.34462833404541016],
['spin', 0.3268473744392395],
['magnetic', 0.3024371564388275],
['magnetization', 0.2868726849555969],
['phases', 0.27178525924682617],
['fermi', 0.26290175318717957],
['electron', 0.25709500908851624],
['phase', 0.23375216126441956]]}
```
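New documents can also be assigned to the learned topics, assuming the stored embedding model is available locally; a minimal sketch (the abstract is a placeholder):
```python
from bertopic import BERTopic

topic_model = BERTopic.load("CCatalao/respapers_topics")
new_docs = ["We study spin dynamics and phase transitions in frustrated magnets."]  # placeholder abstract
topics, probs = topic_model.transform(new_docs)
print(topics)
```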
## Topic overview
* Number of topics: 112
* Number of training documents: 29961
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | data - model - paper - time - based | 20 | -1_data_model_paper_time |
| 0 | spin - magnetic - phase - quantum - temperature | 12937 | 0_spin_magnetic_phase_quantum |
| 1 | mass - star - stars - 10 - stellar | 3048 | 1_mass_star_stars_10 |
| 2 | reinforcement - reinforcement learning - learning - policy - robot | 2564 | 2_reinforcement_reinforcement learning_learning_policy |
| 3 | logic - semantics - programs - automata - languages | 556 | 3_logic_semantics_programs_automata |
| 4 | neural - networks - neural networks - deep - training | 478 | 4_neural_networks_neural networks_deep |
| 5 | networks - community - network - social - nodes | 405 | 5_networks_community_network_social |
| 6 | word - translation - language - words - sentence | 340 | 6_word_translation_language_words |
| 7 | object - 3d - camera - pose - localization | 298 | 7_object_3d_camera_pose |
| 8 | classification - label - classifier - learning - classifiers | 294 | 8_classification_label_classifier_learning |
| 9 | convex - gradient - stochastic - convergence - optimization | 287 | 9_convex_gradient_stochastic_convergence |
| 10 | graphs - graph - vertices - vertex - edge | 284 | 10_graphs_graph_vertices_vertex |
| 11 | brain - neurons - connectivity - neural - synaptic | 273 | 11_brain_neurons_connectivity_neural |
| 12 | robots - robot - planning - control - motion | 255 | 12_robots_robot_planning_control |
| 13 | prime - numbers - polynomials - integers - zeta | 245 | 13_prime_numbers_polynomials_integers |
| 14 | tensor - rank - matrix - low rank - pca | 226 | 14_tensor_rank_matrix_low rank |
| 15 | power - energy - grid - renewable - load | 222 | 15_power_energy_grid_renewable |
| 16 | channel - power - mimo - interference - wireless | 219 | 16_channel_power_mimo_interference |
| 17 | adversarial - attacks - adversarial examples - attack - examples | 208 | 17_adversarial_attacks_adversarial examples_attack |
| 18 | gan - gans - generative - generative adversarial - adversarial | 200 | 18_gan_gans_generative_generative adversarial |
| 19 | media - social - twitter - users - social media | 196 | 19_media_social_twitter_users |
| 20 | posterior - monte - monte carlo - carlo - bayesian | 190 | 20_posterior_monte_monte carlo_carlo |
| 21 | estimator - estimators - regression - quantile - estimation | 189 | 21_estimator_estimators_regression_quantile |
| 22 | software - code - developers - projects - development | 178 | 22_software_code_developers_projects |
| 23 | regret - bandit - armed - arm - multi armed | 177 | 23_regret_bandit_armed_arm |
| 24 | omega - mathbb - solutions - boundary - equation | 177 | 24_omega_mathbb_solutions_boundary |
| 25 | numerical - scheme - mesh - method - order | 175 | 25_numerical_scheme_mesh_method |
| 26 | causal - treatment - outcome - effects - causal inference | 174 | 26_causal_treatment_outcome_effects |
| 27 | curvature - mean curvature - riemannian - ricci - metric | 164 | 27_curvature_mean curvature_riemannian_ricci |
| 28 | control - distributed - systems - consensus - agents | 156 | 28_control_distributed_systems_consensus |
| 29 | groups - group - subgroup - subgroups - finite | 153 | 29_groups_group_subgroup_subgroups |
| 30 | segmentation - images - image - convolutional - medical | 148 | 30_segmentation_images_image_convolutional |
| 31 | market - portfolio - asset - price - volatility | 144 | 31_market_portfolio_asset_price |
| 32 | recommendation - user - item - items - recommender | 138 | 32_recommendation_user_item_items |
| 33 | algebra - algebras - lie - mathfrak - modules | 131 | 33_algebra_algebras_lie_mathfrak |
| 34 | quantum - classical - circuits - annealing - circuit | 121 | 34_quantum_classical_circuits_annealing |
| 35 | moduli - varieties - projective - curves - bundles | 119 | 35_moduli_varieties_projective_curves |
| 36 | graph - embedding - node - graphs - network | 117 | 36_graph_embedding_node_graphs |
| 37 | codes - decoding - channel - code - capacity | 113 | 37_codes_decoding_channel_code |
| 38 | sparse - signal - recovery - sensing - measurements | 107 | 38_sparse_signal_recovery_sensing |
| 39 | knot - knots - homology - invariants - link | 103 | 39_knot_knots_homology_invariants |
| 40 | spaces - hardy - operators - mathbb - boundedness | 95 | 40_spaces_hardy_operators_mathbb |
| 41 | blockchain - security - privacy - authentication - encryption | 90 | 41_blockchain_security_privacy_authentication |
| 42 | turbulence - turbulent - flow - flows - reynolds | 89 | 42_turbulence_turbulent_flow_flows |
| 43 | privacy - differential privacy - private - differential - data | 86 | 43_privacy_differential privacy_private_differential |
| 44 | epidemic - disease - infection - infected - infectious | 83 | 44_epidemic_disease_infection_infected |
| 45 | citation - scientific - research - journal - papers | 82 | 45_citation_scientific_research_journal |
| 46 | surface - droplet - fluid - liquid - droplets | 81 | 46_surface_droplet_fluid_liquid |
| 47 | chemical - molecules - molecular - protein - learning | 79 | 47_chemical_molecules_molecular_protein |
| 48 | kähler - manifolds - manifold - complex - metrics | 77 | 48_kähler_manifolds_manifold_complex |
| 49 | games - game - players - nash - player | 74 | 49_games_game_players_nash |
| 50 | patients - patient - clinical - ehr - care | 73 | 50_patients_patient_clinical_ehr |
| 51 | music - musical - audio - chord - note | 70 | 51_music_musical_audio_chord |
| 52 | visual - shot - image - cnns - learning | 70 | 52_visual_shot_image_cnns |
| 53 | speaker - speech - end - recognition - speech recognition | 70 | 53_speaker_speech_end_recognition |
| 54 | cell - cells - tissue - active - tumor | 69 | 54_cell_cells_tissue_active |
| 55 | eeg - brain - signals - sleep - subjects | 69 | 55_eeg_brain_signals_sleep |
| 56 | fairness - fair - discrimination - decision - algorithmic | 67 | 56_fairness_fair_discrimination_decision |
| 57 | clustering - clusters - data - based clustering - cluster | 66 | 57_clustering_clusters_data_based clustering |
| 58 | relativity - black - solutions - einstein - spacetime | 65 | 58_relativity_black_solutions_einstein |
| 59 | mathbb - curves - elliptic - conjecture - fields | 62 | 59_mathbb_curves_elliptic_conjecture |
| 60 | stokes - navier - navier stokes - equations - stokes equations | 61 | 60_stokes_navier_navier stokes_equations |
| 61 | species - population - dispersal - ecosystem - populations | 60 | 61_species_population_dispersal_ecosystem |
| 62 | reconstruction - ct - artifacts - image - images | 58 | 62_reconstruction_ct_artifacts_image |
| 63 | algebra - algebras - mathcal - alpha - crossed | 58 | 63_algebra_algebras_mathcal_alpha |
| 64 | tiling - polytopes - set - polygon - polytope | 58 | 64_tiling_polytopes_set_polygon |
| 65 | mobile - video - network - latency - computing | 57 | 65_mobile_video_network_latency |
| 66 | latent - variational - vae - generative - inference | 55 | 66_latent_variational_vae_generative |
| 67 | players - game - team - player - teams | 54 | 67_players_game_team_player |
| 68 | genes - gene - cancer - expression - sequencing | 53 | 68_genes_gene_cancer_expression |
| 69 | forcing - kappa - definable - cardinal - zfc | 51 | 69_forcing_kappa_definable_cardinal |
| 70 | dna - protein - folding - proteins - molecule | 50 | 70_dna_protein_folding_proteins |
| 71 | spaces - space - metric - metric spaces - topology | 49 | 71_spaces_space_metric_metric spaces |
| 72 | speech - separation - source separation - enhancement - speaker | 49 | 72_speech_separation_source separation_enhancement |
| 73 | imaging - resolution - light - diffraction - phase | 47 | 73_imaging_resolution_light_diffraction |
| 74 | traffic - traffic flow - prediction - temporal - transportation | 46 | 74_traffic_traffic flow_prediction_temporal |
| 75 | climate - precipitation - sea - flood - extreme | 45 | 75_climate_precipitation_sea_flood |
| 76 | audio - sound - event detection - event - bird | 43 | 76_audio_sound_event detection_event |
| 77 | memory - storage - cache - performance - write | 40 | 77_memory_storage_cache_performance |
| 78 | wishart - matrices - eigenvalue - free - smallest | 39 | 78_wishart_matrices_eigenvalue_free |
| 79 | domain - domain adaptation - adaptation - transfer - target | 39 | 79_domain_domain adaptation_adaptation_transfer |
| 80 | glass - glasses - glassy - amorphous - liquids | 39 | 80_glass_glasses_glassy_amorphous |
| 81 | gpu - gpus - nvidia - code - performance | 38 | 81_gpu_gpus_nvidia_code |
| 82 | face - face recognition - facial - recognition - faces | 38 | 82_face_face recognition_facial_recognition |
| 83 | stock - market - price - financial - stocks | 37 | 83_stock_market_price_financial |
| 84 | reaction - flux - metabolic - growth - biochemical | 34 | 84_reaction_flux_metabolic_growth |
| 85 | fleet - routing - vehicles - ride - traffic | 34 | 85_fleet_routing_vehicles_ride |
| 86 | cooperation - evolutionary - game - social - payoff | 33 | 86_cooperation_evolutionary_game_social |
| 87 | students - courses - student - course - education | 33 | 87_students_courses_student_course |
| 88 | action - temporal - video - recognition - videos | 33 | 88_action_temporal_video_recognition |
| 89 | irreducible - group - mathcal - representations - let | 32 | 89_irreducible_group_mathcal_representations |
| 90 | phylogenetic - tree - trees - species - gene | 32 | 90_phylogenetic_tree_trees_species |
| 91 | processes - drift - asymptotic - estimators - stationary | 31 | 91_processes_drift_asymptotic_estimators |
| 92 | wave - waves - water - free surface - shallow water | 30 | 92_wave_waves_water_free surface |
| 93 | distributed - gradient - byzantine - communication - sgd | 30 | 93_distributed_gradient_byzantine_communication |
| 94 | voters - voting - election - voter - winner | 30 | 94_voters_voting_election_voter |
| 95 | gaussian process - gaussian - gp - process - gaussian processes | 30 | 95_gaussian process_gaussian_gp_process |
| 96 | mathfrak - gorenstein - ring - rings - modules | 29 | 96_mathfrak_gorenstein_ring_rings |
| 97 | motivic - gw - cohomology - dm - category | 29 | 97_motivic_gw_cohomology_dm |
| 98 | recurrent - lstm - rnn - recurrent neural - memory | 28 | 98_recurrent_lstm_rnn_recurrent neural |
| 99 | semigroup - semigroups - xy - ordered - pt | 27 | 99_semigroup_semigroups_xy_ordered |
| 100 | robot - robots - human - human robot - children | 25 | 100_robot_robots_human_human robot |
| 101 | categories - category - homotopy - functor - grothendieck | 25 | 101_categories_category_homotopy_functor |
| 102 | queue - queues - server - scheduling - customer | 24 | 102_queue_queues_server_scheduling |
| 103 | topic - topics - topic modeling - lda - documents | 24 | 103_topic_topics_topic modeling_lda |
| 104 | synchronization - oscillators - chimera - coupling - coupled | 24 | 104_synchronization_oscillators_chimera_coupling |
| 105 | stochastic - existence - equation - solutions - uniqueness | 24 | 105_stochastic_existence_equation_solutions |
| 106 | fractional - derivative - derivatives - integral - psi | 23 | 106_fractional_derivative_derivatives_integral |
| 107 | lasso - regression - estimator - estimators - bootstrap | 23 | 107_lasso_regression_estimator_estimators |
| 108 | soil - moisture - machine - resolution - seismic | 22 | 108_soil_moisture_machine_resolution |
| 109 | bayesian optimization - optimization - acquisition - bayesian - bo | 21 | 109_bayesian optimization_optimization_acquisition_bayesian |
| 110 | urban - city - mobility - cities - social | 21 | 110_urban_city_mobility_cities |
</details>
## Training Procedure
The model was trained as follows:
```python
from bertopic import BERTopic
from sklearn.feature_extraction.text import CountVectorizer
from bertopic.representation import KeyBERTInspired
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN
# Prepare sub-models
embedding_model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
umap_model = UMAP(n_components=5, n_neighbors=50, random_state=42, metric="cosine", verbose=True)
hdbscan_model = HDBSCAN(min_samples=20, gen_min_span_tree=True, prediction_data=False, min_cluster_size=20)
vectorizer_model = CountVectorizer(stop_words="english", ngram_range=(1, 3), min_df=5)
# Representation models
representation_models = {"KeyBERTInspired": KeyBERTInspired()}
# Fit BERTopic
topic_model = BERTopic(
umap_model=umap_model,
hdbscan_model=hdbscan_model,
vectorizer_model=vectorizer_model,
representation_model=representation_models,
min_topic_size=10,
n_gram_range=(1, 1),
nr_topics=None,
seed_topic_list=None,
top_n_words=10,
calculate_probabilities=False,
language=None,
verbose=True
).fit(docs)
```
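After fitting, the topic assignments and keywords can be inspected directly on the fitted model. The snippet below is a minimal sketch using standard BERTopic methods (`get_topic_info`, `get_topic`, `transform`); it is not part of the original training script, and `new_docs` is an illustrative placeholder.
```python
# Inspect the fitted topic model (`topic_model` and `docs` come from the training code above).
topic_info = topic_model.get_topic_info()   # one row per topic, with sizes and names
print(topic_info.head())
print(topic_model.get_topic(0))             # top keywords and scores for topic 0

# Assign topics to unseen documents
new_docs = ["Graph neural networks for molecule property prediction."]
topics, _ = topic_model.transform(new_docs)
print(topics)
```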
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.11
|
vj1148/lora-peft-holding-classification-cot
|
vj1148
| 2023-08-13T20:52:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:52:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
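For reference, the quantization settings listed above map onto a `BitsAndBytesConfig` roughly as follows. This is a reconstruction for illustration based on the listed values, not the script actually used to train this adapter.
```python
from transformers import BitsAndBytesConfig

# Sketch of the 8-bit quantization config described above (values copied from the list).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```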
### Framework versions
- PEFT 0.4.0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_5_e0_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T20:45:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T18:03:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T20:45:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:12:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0056
|
bigmorning
| 2023-08-13T20:44:40Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T20:44:32Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0056
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0056
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0009
- Train Accuracy: 0.0795
- Train Wermet: 7.7601
- Validation Loss: 0.5639
- Validation Accuracy: 0.0769
- Validation Wermet: 6.9567
- Epoch: 55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
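For reference, the `AdamWeightDecay` settings above can be reproduced with the TensorFlow optimizer shipped in `transformers`. This is a sketch based on the listed values, not the original training script.
```python
from transformers import AdamWeightDecay

# Optimizer matching the hyperparameters listed above (reconstruction for illustration).
optimizer = AdamWeightDecay(
    learning_rate=1e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    weight_decay_rate=0.01,
)
```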
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
| 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 |
| 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 |
| 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 |
| 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 |
| 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 |
| 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 |
| 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 |
| 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 |
| 0.0014 | 0.0795 | 8.2875 | 0.5658 | 0.0768 | 7.5768 | 50 |
| 0.0011 | 0.0795 | 8.4376 | 0.5665 | 0.0768 | 7.2469 | 51 |
| 0.0018 | 0.0795 | 8.3093 | 0.5771 | 0.0768 | 7.2637 | 52 |
| 0.0021 | 0.0795 | 7.8370 | 0.5680 | 0.0768 | 7.0030 | 53 |
| 0.0014 | 0.0795 | 7.7408 | 0.5661 | 0.0769 | 7.1664 | 54 |
| 0.0009 | 0.0795 | 7.7601 | 0.5639 | 0.0769 | 6.9567 | 55 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e9_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T20:38:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:38:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e0_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T20:37:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:04:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0054
|
bigmorning
| 2023-08-13T20:35:53Z | 58 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T20:35:47Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0054
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0054
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0021
- Train Accuracy: 0.0795
- Train Wermet: 7.8370
- Validation Loss: 0.5680
- Validation Accuracy: 0.0768
- Validation Wermet: 7.0030
- Epoch: 53
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
| 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 |
| 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 |
| 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 |
| 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 |
| 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 |
| 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 |
| 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 |
| 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 |
| 0.0014 | 0.0795 | 8.2875 | 0.5658 | 0.0768 | 7.5768 | 50 |
| 0.0011 | 0.0795 | 8.4376 | 0.5665 | 0.0768 | 7.2469 | 51 |
| 0.0018 | 0.0795 | 8.3093 | 0.5771 | 0.0768 | 7.2637 | 52 |
| 0.0021 | 0.0795 | 7.8370 | 0.5680 | 0.0768 | 7.0030 | 53 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e4_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T20:33:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:33:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e8_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T20:31:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:31:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0053
|
bigmorning
| 2023-08-13T20:31:30Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T20:31:22Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0053
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0053
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0018
- Train Accuracy: 0.0795
- Train Wermet: 8.3093
- Validation Loss: 0.5771
- Validation Accuracy: 0.0768
- Validation Wermet: 7.2637
- Epoch: 52
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
| 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 |
| 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 |
| 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 |
| 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 |
| 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 |
| 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 |
| 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 |
| 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 |
| 0.0014 | 0.0795 | 8.2875 | 0.5658 | 0.0768 | 7.5768 | 50 |
| 0.0011 | 0.0795 | 8.4376 | 0.5665 | 0.0768 | 7.2469 | 51 |
| 0.0018 | 0.0795 | 8.3093 | 0.5771 | 0.0768 | 7.2637 | 52 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e-1_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T20:29:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T19:57:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e3_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T20:26:17Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:26:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e2_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T20:19:21Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:19:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0050
|
bigmorning
| 2023-08-13T20:18:38Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T20:18:18Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0050
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0050
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0006
- Train Accuracy: 0.0795
- Train Wermet: 8.0561
- Validation Loss: 0.5729
- Validation Accuracy: 0.0767
- Validation Wermet: 7.4189
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
| 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 |
| 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 |
| 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 |
| 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 |
| 0.0006 | 0.0795 | 8.0596 | 0.5573 | 0.0768 | 6.9489 | 46 |
| 0.0008 | 0.0795 | 8.0277 | 0.5581 | 0.0769 | 6.9081 | 47 |
| 0.0005 | 0.0795 | 7.6084 | 0.5604 | 0.0769 | 6.7158 | 48 |
| 0.0006 | 0.0795 | 8.0561 | 0.5729 | 0.0767 | 7.4189 | 49 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e1_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T20:12:26Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:12:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e5_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T20:11:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T19:15:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
nikitamoshkov/ppo-LunarLander-v2
|
nikitamoshkov
| 2023-08-13T20:08:49Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T20:08:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.89 +/- 20.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
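A minimal loading sketch is shown below; the repo id follows this card, while the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention and is not confirmed by the card.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed).
checkpoint = load_from_hub(
    repo_id="nikitamoshkov/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```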
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e0_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T20:05:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:05:29Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
redstonehero/epicrealism_pureevolutionv5
|
redstonehero
| 2023-08-13T20:05:07Z | 54 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T18:39:14Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/lofi_v3
|
redstonehero
| 2023-08-13T20:05:07Z | 32 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T18:40:18Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/lifelikediffusionv30
|
redstonehero
| 2023-08-13T20:05:04Z | 29 | 2 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T18:45:52Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/m4rv3lsdungeonsv40
|
redstonehero
| 2023-08-13T20:05:01Z | 5 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T18:43:05Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/reliberate_v20
|
redstonehero
| 2023-08-13T20:04:44Z | 3 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T18:38:19Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e8_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T20:04:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T20:04:19Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0046
|
bigmorning
| 2023-08-13T20:00:51Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T20:00:43Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0046
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0046
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0019
- Train Accuracy: 0.0795
- Train Wermet: 7.8491
- Validation Loss: 0.5533
- Validation Accuracy: 0.0769
- Validation Wermet: 6.9541
- Epoch: 45
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
| 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 |
| 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 |
| 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 |
| 0.0019 | 0.0795 | 7.8491 | 0.5533 | 0.0769 | 6.9541 | 45 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e-1_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T19:58:35Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T19:58:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
sherif1311/flan-t5-base-intent
|
sherif1311
| 2023-08-13T19:57:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-13T17:45:07Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-intent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-intent
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- F1: 100.0
- Gen Len: 2.3333
## Model description
Wrap any tweet in double quotation marks. Label mapping: 0: Anti-tobacco, 1: Neutral, 2: Pro-tobacco.
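A minimal inference sketch is shown below. It assumes the checkpoint is published under this card's repo id and that the model generates the label index as text, per the mapping above; neither assumption is confirmed by the card.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch: classify a tweet with the fine-tuned checkpoint (repo id assumed from this card).
tokenizer = AutoTokenizer.from_pretrained("sherif1311/flan-t5-base-intent")
model = AutoModelForSeq2SeqLM.from_pretrained("sherif1311/flan-t5-base-intent")

tweet = '"Quitting smoking was the best decision I ever made."'
inputs = tokenizer(tweet, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: 0, 1, or 2
```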
## Intended uses & limitations
The model fine-tuned by STOP is intended for anti-tobacco/pro-tobacco monitoring of social media.
## Training and evaluation data
The model was developed and fine-tuned at STOP, University of Bath, UK.
The data used is sherif1311/intend, which was collected, augmented, and used for training by STOP.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.12.1+cu116
- Datasets 2.14.4
- Tokenizers 0.12.1
|
bigmorning/whisper_charsplit_new_round2__0045
|
bigmorning
| 2023-08-13T19:56:29Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T19:56:22Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0045
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0045
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0095
- Train Accuracy: 0.0793
- Train Wermet: 8.3244
- Validation Loss: 0.5662
- Validation Accuracy: 0.0767
- Validation Wermet: 6.9524
- Epoch: 44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
| 0.0001 | 0.0795 | 8.2925 | 0.5648 | 0.0770 | 7.1917 | 42 |
| 0.0001 | 0.0795 | 7.9155 | 0.5752 | 0.0769 | 6.4900 | 43 |
| 0.0095 | 0.0793 | 8.3244 | 0.5662 | 0.0767 | 6.9524 | 44 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
s3nh/flozi00-Llama-2-13B-german-assistant-v3-GGML
|
s3nh
| 2023-08-13T19:51:57Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-13T19:51:56Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/Photolens/OpenOrcaxOpenChat-2-13b-langchain-chat).
### inference
```python
import ctransformers
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local model directory and the GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e2_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T19:51:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T18:54:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e9_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T19:47:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T19:47:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e1_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T19:44:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T18:47:40Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0042
|
bigmorning
| 2023-08-13T19:43:07Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T19:43:01Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0042
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0042
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0001
- Train Accuracy: 0.0795
- Train Wermet: 8.2484
- Validation Loss: 0.5678
- Validation Accuracy: 0.0769
- Validation Wermet: 7.6993
- Epoch: 41
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
| 0.0001 | 0.0795 | 8.2484 | 0.5678 | 0.0769 | 7.6993 | 41 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e8_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T19:40:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T19:40:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0041
|
bigmorning
| 2023-08-13T19:38:44Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T19:38:36Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0041
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0041
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0001
- Train Accuracy: 0.0795
- Train Wermet: 8.1912
- Validation Loss: 0.5632
- Validation Accuracy: 0.0770
- Validation Wermet: 7.1929
- Epoch: 40
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
| 0.0018 | 0.0795 | 8.4062 | 0.5713 | 0.0768 | 7.2127 | 36 |
| 0.0012 | 0.0795 | 8.3370 | 0.5683 | 0.0768 | 7.1040 | 37 |
| 0.0005 | 0.0795 | 7.9931 | 0.5658 | 0.0769 | 6.8043 | 38 |
| 0.0002 | 0.0795 | 7.9500 | 0.5660 | 0.0769 | 7.0891 | 39 |
| 0.0001 | 0.0795 | 8.1912 | 0.5632 | 0.0770 | 7.1929 | 40 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e0_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T19:37:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T18:40:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
shreyasdatar/distilbert-base-uncased-finetuned-imdb
|
shreyasdatar
| 2023-08-13T19:35:06Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:tweet_eval",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-19T14:09:37Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- tweet_eval
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
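These settings correspond roughly to the following `TrainingArguments`; this is a reconstruction for illustration rather than the exact script behind this card, and `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```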
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6538 | 1.0 | 149 | 3.3045 |
| 3.3379 | 2.0 | 298 | 3.1949 |
| 3.2875 | 3.0 | 447 | 3.1166 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e7_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T19:33:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T19:33:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e5_s108_v4_l4_v100
|
KingKazma
| 2023-08-13T19:20:07Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T19:20:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0036 | bigmorning | 2023-08-13T19:16:56Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T19:16:50Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0036
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0036
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0036
- Train Accuracy: 0.0795
- Train Wermet: 8.9171
- Validation Loss: 0.5687
- Validation Accuracy: 0.0767
- Validation Wermet: 7.6962
- Epoch: 35
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
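A minimal sketch of how the optimizer dictionary above might be instantiated with the TF utilities shipped in `transformers` is given below; only the listed values are set, everything else is left at its default.

```python
# Sketch: values are copied from the optimizer config listed above.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
```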
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
| 0.0036 | 0.0795 | 8.9171 | 0.5687 | 0.0767 | 7.6962 | 35 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round2__0035 | bigmorning | 2023-08-13T19:12:33Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T19:12:25Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0035
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0010
- Train Accuracy: 0.0795
- Train Wermet: 8.1006
- Validation Loss: 0.5918
- Validation Accuracy: 0.0766
- Validation Wermet: 7.4447
- Epoch: 34
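The card gives no usage snippet; a minimal inference sketch, assuming the repo ships TF weights and that the base `openai/whisper-tiny` processor is appropriate, might look like this:

```python
# Sketch only: the placeholder audio and the choice of processor are assumptions.
import numpy as np
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained(
    "bigmorning/whisper_charsplit_new_round2__0035"
)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 second of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```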
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
| 0.0005 | 0.0795 | 8.2728 | 0.5669 | 0.0768 | 7.1451 | 33 |
| 0.0010 | 0.0795 | 8.1006 | 0.5918 | 0.0766 | 7.4447 | 34 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e2_s108_v4_l4_v100 | KingKazma | 2023-08-13T19:11:24Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T19:11:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e4_s108_v4_l5_v50 | KingKazma | 2023-08-13T19:11:09Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T19:11:07Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0033 | bigmorning | 2023-08-13T19:03:48Z | 55 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T19:03:41Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0033
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0033
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0009
- Train Accuracy: 0.0795
- Train Wermet: 8.4768
- Validation Loss: 0.5611
- Validation Accuracy: 0.0769
- Validation Wermet: 7.6392
- Epoch: 32
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
| 0.0009 | 0.0795 | 8.4768 | 0.5611 | 0.0769 | 7.6392 | 32 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e1_s108_v4_l4_v100 | KingKazma | 2023-08-13T19:02:35Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T15:55:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
aongwachi/amity-09092023 | aongwachi | 2023-08-13T19:00:52Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T18:58:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
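Expressed with the `transformers` quantization API, the configuration above corresponds roughly to the following sketch; only the listed fields are set, and model loading itself is omitted.

```python
# Sketch: field values are copied from the quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```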
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round2__0032 | bigmorning | 2023-08-13T18:59:27Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T18:59:19Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0032
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0032
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0019
- Train Accuracy: 0.0795
- Train Wermet: 8.6037
- Validation Loss: 0.5715
- Validation Accuracy: 0.0767
- Validation Wermet: 7.6157
- Epoch: 31
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
| 0.0019 | 0.0795 | 8.6037 | 0.5715 | 0.0767 | 7.6157 | 31 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e2_s108_v4_l4_v100 | KingKazma | 2023-08-13T18:59:21Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T18:20:40Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
sherif1311/flan-t5-base-classification_int1 | sherif1311 | 2023-08-13T18:55:37Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-08-13T18:50:58Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-classification_int1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-classification_int1
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
- F1: 99.7778
- Gen Len: 2.3333
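A minimal inference sketch using the `text2text-generation` pipeline is shown below; the input string is invented, and the exact prompt/label format expected by the classifier is not documented in this card.

```python
# Sketch only: the example input and prompt format are assumptions.
from transformers import pipeline

classifier = pipeline(
    "text2text-generation",
    model="sherif1311/flan-t5-base-classification_int1",
)
print(classifier("classify: I really enjoyed this product."))
```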
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.12.1+cu116
- Datasets 2.14.4
- Tokenizers 0.12.1
|
bigmorning/whisper_charsplit_new_round2__0031 | bigmorning | 2023-08-13T18:55:01Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T18:54:54Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round2__0031
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round2__0031
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0022
- Train Accuracy: 0.0795
- Train Wermet: 8.2353
- Validation Loss: 0.5789
- Validation Accuracy: 0.0767
- Validation Wermet: 7.4442
- Epoch: 30
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0010 | 0.0795 | 8.7507 | 0.5575 | 0.0767 | 7.6778 | 0 |
| 0.0013 | 0.0795 | 8.9468 | 0.5652 | 0.0766 | 8.3360 | 1 |
| 0.0025 | 0.0795 | 8.7338 | 0.5673 | 0.0765 | 8.3770 | 2 |
| 0.0019 | 0.0795 | 8.9450 | 0.5623 | 0.0766 | 7.7117 | 3 |
| 0.0011 | 0.0795 | 8.9053 | 0.5609 | 0.0767 | 7.5155 | 4 |
| 0.0012 | 0.0795 | 8.8862 | 0.5667 | 0.0767 | 8.2913 | 5 |
| 0.0009 | 0.0795 | 8.7510 | 0.5642 | 0.0766 | 7.9083 | 6 |
| 0.0037 | 0.0795 | 9.3428 | 0.5717 | 0.0764 | 8.2631 | 7 |
| 0.0031 | 0.0795 | 9.2135 | 0.5636 | 0.0766 | 8.2384 | 8 |
| 0.0011 | 0.0795 | 8.9730 | 0.5605 | 0.0767 | 8.3958 | 9 |
| 0.0005 | 0.0795 | 9.3749 | 0.5552 | 0.0768 | 8.0800 | 10 |
| 0.0003 | 0.0795 | 9.3340 | 0.5584 | 0.0768 | 8.1322 | 11 |
| 0.0005 | 0.0795 | 9.2292 | 0.5687 | 0.0767 | 8.5576 | 12 |
| 0.0037 | 0.0795 | 9.2838 | 0.5751 | 0.0765 | 7.4189 | 13 |
| 0.0038 | 0.0795 | 8.7270 | 0.5605 | 0.0767 | 7.7098 | 14 |
| 0.0012 | 0.0795 | 8.8259 | 0.5563 | 0.0768 | 8.2647 | 15 |
| 0.0005 | 0.0795 | 9.0553 | 0.5620 | 0.0768 | 8.5020 | 16 |
| 0.0004 | 0.0795 | 9.1734 | 0.5607 | 0.0768 | 8.0252 | 17 |
| 0.0003 | 0.0795 | 9.0084 | 0.5571 | 0.0769 | 8.1563 | 18 |
| 0.0014 | 0.0795 | 8.7153 | 0.5804 | 0.0765 | 7.8654 | 19 |
| 0.0058 | 0.0794 | 8.8460 | 0.5706 | 0.0766 | 7.4342 | 20 |
| 0.0020 | 0.0795 | 8.6599 | 0.5612 | 0.0767 | 7.7369 | 21 |
| 0.0007 | 0.0795 | 8.6456 | 0.5543 | 0.0768 | 7.4625 | 22 |
| 0.0008 | 0.0795 | 8.3246 | 0.5620 | 0.0768 | 7.4475 | 23 |
| 0.0012 | 0.0795 | 7.9451 | 0.5615 | 0.0768 | 7.0907 | 24 |
| 0.0025 | 0.0795 | 8.1065 | 0.5619 | 0.0768 | 7.7020 | 25 |
| 0.0011 | 0.0795 | 8.4237 | 0.5710 | 0.0768 | 7.4035 | 26 |
| 0.0009 | 0.0795 | 8.3074 | 0.5641 | 0.0768 | 7.1747 | 27 |
| 0.0007 | 0.0795 | 8.5183 | 0.5688 | 0.0768 | 7.4310 | 28 |
| 0.0014 | 0.0795 | 8.6604 | 0.5750 | 0.0767 | 8.0751 | 29 |
| 0.0022 | 0.0795 | 8.2353 | 0.5789 | 0.0767 | 7.4442 | 30 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e1_s108_v4_l4_v100 | KingKazma | 2023-08-13T18:52:26Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T18:13:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
mark-oppenheim/Reinforce-Pixelcopter-PLE-v0 | mark-oppenheim | 2023-08-13T18:51:41Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-08-13T18:47:26Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 61.20 +/- 42.26
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s108_v4_l5_v50 | KingKazma | 2023-08-13T18:49:12Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T17:50:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|