| modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
---|---|---|---|---|---|---|---|---|---|
Philophilae/xlm-roberta-base-finetuned-panx-de-fr
|
Philophilae
| 2023-08-24T09:04:40Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-24T08:52:33Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1592
- F1: 0.8533
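A minimal inference sketch, assuming the checkpoint works with the standard `transformers` token-classification pipeline (the example sentence is only illustrative):
```python
from transformers import pipeline

# Load the fine-tuned PAN-X de/fr NER model; aggregation_strategy groups word pieces into entities.
ner = pipeline(
    "token-classification",
    model="Philophilae/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```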
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 358 | 0.1775 | 0.8293 |
| 0.2368 | 2.0 | 716 | 0.1624 | 0.8403 |
| 0.2368 | 3.0 | 1074 | 0.1592 | 0.8533 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.0
- Datasets 1.16.1
- Tokenizers 0.13.3
|
JessicaHsu/a2c-PandaReachDense-v2
|
JessicaHsu
| 2023-08-24T08:59:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-08T08:12:49Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.44 +/- 0.75
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
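A minimal loading sketch, assuming the repository follows the usual `huggingface_sb3` layout; the checkpoint filename is an assumption, so check the repository's file list:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from the common "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(
    repo_id="JessicaHsu/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
# Rolling the policy out additionally requires the matching panda-gym environment (PandaReachDense-v2).
```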
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
Jasper881108/chatglm-rm-lora-delta
|
Jasper881108
| 2023-08-24T08:57:55Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T08:57:53Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
EmirhanExecute/ppo-LunarLander-try2
|
EmirhanExecute
| 2023-08-24T08:56:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T08:56:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.15 +/- 15.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
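A minimal sketch of loading and rolling out the policy, assuming the usual `huggingface_sb3` layout; the checkpoint filename is an assumption:
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed; check the repository's file list. LunarLander-v2 needs gymnasium[box2d].
checkpoint = load_from_hub("EmirhanExecute/ppo-LunarLander-try2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```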
|
nomsgadded/Translation
|
nomsgadded
| 2023-08-24T08:52:26Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"fr",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-24T08:13:12Z |
---
language:
- en
- fr
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: Translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Translation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books en-fr dataset.
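A minimal inference sketch, assuming the model was trained with the usual T5 task prefix from the opus_books en-fr recipe (the prefix and the example sentence are assumptions):
```python
from transformers import pipeline

# T5-style checkpoints expect an instruction prefix in the input text.
translator = pipeline("text2text-generation", model="nomsgadded/Translation")
print(translator("translate English to French: The book is on the table."))
```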
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
922-CA/negev-gfl-rvc2-tests
|
922-CA
| 2023-08-24T08:51:21Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-22T08:46:16Z |
---
license: openrail
---
Test RVC2 models of the GFL character Negev, trained with various hyperparameters and datasets.
# negev-test-0 (~07/2023)
* Trained on dataset of ~30 items, dialogue from game
* Trained for ~100 epochs
* First attempt
# negev-test-1 - nne1_e10_s150 (08/22/2023)
* Trained on dataset of ~30 items, dialogue from game
* Trained for 10 epochs (150 steps)
* Less artifacting but with accent
# negev-test-1 - nne1_e60_s900 (08/22/2023)
* Trained on dataset of ~30 items, dialogue from game
* Trained for 60 epochs (900 steps)
* Tends to be clearer and with less accent
|
malanevans/dqn-SpaceInvadersNoFrameskip-v4
|
malanevans
| 2023-08-24T08:48:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T08:48:15Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.50 +/- 113.10
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga malanevans -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga malanevans -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga malanevans
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ahmedtremo/image-gen-v2
|
ahmedtremo
| 2023-08-24T08:38:47Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-22T13:08:29Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of GenNext logo
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
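A rough usage sketch, assuming the AutoTrain DreamBooth run produced LoRA weights to be applied on top of the SDXL base model; if the repository instead holds full pipeline weights, load it directly with `DiffusionPipeline.from_pretrained`:
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and apply the DreamBooth weights (assumed to be LoRA weights).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ahmedtremo/image-gen-v2")

image = pipe("photo of GenNext logo", num_inference_steps=30).images[0]
image.save("gennext_logo.png")
```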
|
amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-distilroberta-model-v0.1
|
amazon
| 2023-08-24T08:36:30Z | 187 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-24T08:36:24Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-distilroberta-model-v0.1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-distilroberta-model-v0.1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
bigmorning/train_from_raw_cv12__0015
|
bigmorning
| 2023-08-24T08:33:31Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-24T08:33:23Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: train_from_raw_cv12__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# train_from_raw_cv12__0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Train Accuracy: 0.0032
- Train Wermet: 8.3902
- Validation Loss: nan
- Validation Accuracy: 0.0032
- Validation Wermet: 8.3902
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| nan | 0.0032 | 8.3778 | nan | 0.0032 | 8.3902 | 0 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 1 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 2 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 3 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 4 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 5 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 6 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 7 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 8 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 9 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 10 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 11 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 12 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 13 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 14 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
RajuEEE/RewardModelForQuestionAnswering_GPT2_Classify
|
RajuEEE
| 2023-08-24T08:28:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T08:28:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
lordhiew/myfirsttrain
|
lordhiew
| 2023-08-24T08:25:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-28T07:25:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/train_from_raw_cv12__0010
|
bigmorning
| 2023-08-24T08:12:54Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-24T08:12:46Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: train_from_raw_cv12__0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# train_from_raw_cv12__0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Train Accuracy: 0.0032
- Train Wermet: 8.3902
- Validation Loss: nan
- Validation Accuracy: 0.0032
- Validation Wermet: 8.3902
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| nan | 0.0032 | 8.3778 | nan | 0.0032 | 8.3902 | 0 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 1 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 2 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 3 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 4 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 5 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 6 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 7 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 8 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 9 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
raygx/distilGPT-NepSA
|
raygx
| 2023-08-24T08:12:30Z | 71 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T04:59:50Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilGPT-NepSA
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilGPT-NepSA
This model is a fine-tuned version of [raygx/distilGPT-Nepali](https://huggingface.co/raygx/distilGPT-Nepali) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6068
- Validation Loss: 0.6592
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.04}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8415 | 0.7254 | 0 |
| 0.6068 | 0.6592 | 1 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-roberta-large-model-v0.1
|
amazon
| 2023-08-24T08:10:03Z | 27 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-24T08:09:30Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-roberta-large-model-v0.1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-roberta-large-model-v0.1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
aware-ai/wav2vec2-base-german
|
aware-ai
| 2023-08-24T08:01:53Z | 104 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_10_0",
"generated_from_trainer",
"de",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-01T19:46:01Z |
---
language:
- de
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_10_0
- generated_from_trainer
model-index:
- name: wav2vec2-base-german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-german
This model is a fine-tuned version of [wav2vec2-base-german](https://huggingface.co/wav2vec2-base-german) on the MOZILLA-FOUNDATION/COMMON_VOICE_10_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9302
- Wer: 0.7428
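A minimal transcription sketch, assuming the checkpoint is a CTC model usable with the standard `transformers` ASR pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

# Wav2Vec2 CTC checkpoints plug directly into the automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="aware-ai/wav2vec2-base-german")
print(asr("german_sample.wav")["text"])
```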
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8427 | 1.0 | 451 | 1.0878 | 0.8091 |
| 0.722 | 2.0 | 902 | 0.9732 | 0.7593 |
| 0.6589 | 3.0 | 1353 | 0.9302 | 0.7428 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
juandalibaba/my_awesome_wnut_model
|
juandalibaba
| 2023-08-24T07:56:48Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-23T06:40:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: juandalibaba/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juandalibaba/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6376
- Validation Loss: 1.8223
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7876 | 1.9931 | 0 |
| 1.7614 | 1.8223 | 1 |
| 1.6376 | 1.8223 | 2 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/train_from_raw_cv12__0005
|
bigmorning
| 2023-08-24T07:52:22Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-24T07:52:14Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: train_from_raw_cv12__0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# train_from_raw_cv12__0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Train Accuracy: 0.0032
- Train Wermet: 8.3902
- Validation Loss: nan
- Validation Accuracy: 0.0032
- Validation Wermet: 8.3902
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| nan | 0.0032 | 8.3778 | nan | 0.0032 | 8.3902 | 0 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 1 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 2 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 3 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 4 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ardt-multipart/ardt-multipart-arrl_sgld_train_walker2d_high-2408_0757-99
|
ardt-multipart
| 2023-08-24T07:52:01Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T06:58:51Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_sgld_train_walker2d_high-2408_0757-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_sgld_train_walker2d_high-2408_0757-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
amazon/sm-hackathon-actionability-9-multi-outputs-setfit-model-v0.1
|
amazon
| 2023-08-24T07:48:06Z | 24 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-24T07:28:22Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amazon/sm-hackathon-actionability-9-multi-outputs-setfit-model-v0.1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amazon/sm-hackathon-actionability-9-multi-outputs-setfit-model-v0.1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Hamzaabbas77/distilbert-base-uncased-finetuned-sst2
|
Hamzaabbas77
| 2023-08-24T07:44:14Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T07:15:12Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Hamzaabbas77/distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hamzaabbas77/distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6840
- Validation Loss: 0.6827
- Train Accuracy: 0.5450
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 324, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6840 | 0.6827 | 0.5450 | 0 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sunil18p31a0101/dqn-SpaceInvadersNoFrameskip-v4
|
sunil18p31a0101
| 2023-08-24T07:41:41Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T06:12:12Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 402.50 +/- 168.53
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sunil18p31a0101 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sunil18p31a0101 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sunil18p31a0101
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
avasaz/avasaz-large
|
avasaz
| 2023-08-24T07:30:53Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"license:mit",
"region:us"
] |
text-to-audio
| 2023-08-23T19:46:30Z |
---
inference: false
tags:
- musicgen
license: mit
---
# Avasaz Large (3.3B) - Make music directly from your ideas
<p align="center">
<img src="https://huggingface.co/avasaz/avasaz-large/resolve/main/avasaz_logo.png" width=256 height=256 />
</p>
## What is Avasaz?
Avasaz (a combination of the Persian words آوا, meaning song, and ساز, meaning maker; literally _song maker_) is a _state-of-the-art generative AI model_ that helps you turn your ideas into music in a matter of minutes. The model has been developed by [Muhammadreza Haghiri](https://haghiri75.com/en) as part of an effort to build a suite of AI programs that make the world a better place for our future generations.
## How can you use Avasaz?
[Open in Colab](https://colab.research.google.com/github/prp-e/avasaz/blob/main/Avasaz_Inference.ipynb)
Currently, inference is only available on _Colab_. Code will be published here as soon as possible.
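Until that code is published, here is a rough sketch, assuming the checkpoint follows the standard `transformers` MusicGen layout implied by the tags:
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Assumes the repository is loadable with the stock MusicGen classes.
processor = AutoProcessor.from_pretrained("avasaz/avasaz-large")
model = MusicgenForConditionalGeneration.from_pretrained("avasaz/avasaz-large")

inputs = processor(text=["calm piano melody with soft strings"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, max_new_tokens=256)  # roughly five seconds of audio
```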
|
nishant-glance/path-to-save-model-2-1-priorp
|
nishant-glance
| 2023-08-24T07:09:25Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-24T06:20:37Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - nishant-glance/path-to-save-model-2-1-priorp
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
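A minimal generation sketch, assuming the repository contains full pipeline weights, as the `diffusers:StableDiffusionPipeline` tag suggests:
```python
import torch
from diffusers import StableDiffusionPipeline

# The training instance prompt ("a photo of sks dog") triggers the learned subject.
pipe = StableDiffusionPipeline.from_pretrained(
    "nishant-glance/path-to-save-model-2-1-priorp", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```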
|
achmaddaa/ametv2
|
achmaddaa
| 2023-08-24T07:07:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T07:04:20Z |
---
license: creativeml-openrail-m
---
|
DineshK/dummy-model
|
DineshK
| 2023-08-24T07:05:34Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-24T07:03:17Z |
---
license: mit
base_model: camembert-base
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
openchat/opencoderplus
|
openchat
| 2023-08-24T07:01:34Z | 1,487 | 103 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"llama",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-30T15:28:09Z |
---
language:
- en
tags:
- llama
---
# OpenChat: Less is More for Open-source Models
OpenChat is a series of open-source language models fine-tuned on a diverse and high-quality dataset of multi-round conversations. With only ~6K GPT-4 conversations filtered from the ~90K ShareGPT conversations, OpenChat is designed to achieve high performance with limited data.
**Generic models:**
- OpenChat: based on LLaMA-13B (2048 context length)
- **🚀 105.7%** of ChatGPT score on Vicuna GPT-4 evaluation
- **🔥 80.9%** Win-rate on AlpacaEval
- **🤗 Only used 6K data for finetuning!!!**
- OpenChat-8192: based on LLaMA-13B (extended to 8192 context length)
- **106.6%** of ChatGPT score on Vicuna GPT-4 evaluation
- **79.5%** Win-rate on AlpacaEval
**Code models:**
- OpenCoderPlus: based on StarCoderPlus (native 8192 context length)
- **102.5%** of ChatGPT score on Vicuna GPT-4 evaluation
- **78.7%** Win-rate on AlpacaEval
*Note:* Please load the pretrained models using *bfloat16*
## Code and Inference Server
We provide the full source code, including an inference server compatible with the "ChatCompletions" API, in the [OpenChat](https://github.com/imoneoi/openchat) GitHub repository.
## Web UI
OpenChat also includes a web UI for a better user experience. See the GitHub repository for instructions.
## Conversation Template
The conversation template **involves concatenating tokens**.
Besides the base model vocabulary, an end-of-turn token `<|end_of_turn|>` is added, with id `eot_token_id`.
```python
# OpenChat
[bos_token_id] + tokenize("Human: ") + tokenize(user_question) + [eot_token_id] + tokenize("Assistant: ")
# OpenCoder
tokenize("User:") + tokenize(user_question) + [eot_token_id] + tokenize("Assistant:")
```
*Hint: In BPE, `tokenize(A) + tokenize(B)` does not always equal `tokenize(A + B)`.*
Following is the code for generating the conversation templates:
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelConfig:
# Prompt
system: Optional[str]
role_prefix: dict
ai_role: str
eot_token: str
bos_token: Optional[str] = None
# Get template
def generate_conversation_template(self, tokenize_fn, tokenize_special_fn, message_list):
tokens = []
masks = []
# begin of sentence (bos)
if self.bos_token:
t = tokenize_special_fn(self.bos_token)
tokens.append(t)
masks.append(False)
# System
if self.system:
t = tokenize_fn(self.system) + [tokenize_special_fn(self.eot_token)]
tokens.extend(t)
masks.extend([False] * len(t))
# Messages
for idx, message in enumerate(message_list):
# Prefix
t = tokenize_fn(self.role_prefix[message["from"]])
tokens.extend(t)
masks.extend([False] * len(t))
# Message
if "value" in message:
t = tokenize_fn(message["value"]) + [tokenize_special_fn(self.eot_token)]
tokens.extend(t)
masks.extend([message["from"] == self.ai_role] * len(t))
else:
assert idx == len(message_list) - 1, "Empty message for completion must be on the last."
return tokens, masks
MODEL_CONFIG_MAP = {
# OpenChat / OpenChat-8192
"openchat": ModelConfig(
# Prompt
system=None,
role_prefix={
"human": "Human: ",
"gpt": "Assistant: "
},
ai_role="gpt",
eot_token="<|end_of_turn|>",
bos_token="<s>",
),
# OpenCoder / OpenCoderPlus
"opencoder": ModelConfig(
# Prompt
system=None,
role_prefix={
"human": "User:",
"gpt": "Assistant:"
},
ai_role="gpt",
eot_token="<|end_of_turn|>",
bos_token=None,
)
}
```
|
greenyslimerfahrungen/greenyslimerfahrungen
|
greenyslimerfahrungen
| 2023-08-24T06:45:50Z | 0 | 0 |
espnet
|
[
"espnet",
"Greeny Slim Erfahrungen",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-24T06:45:09Z |
---
license: cc-by-nc-sa-4.0
language:
- en
library_name: espnet
tags:
- Greeny Slim Erfahrungen
---
[Greeny Slim Erfahrungen](https://supplementtycoon.com/de/greeny-slim-fruchtgummis/) However, it is important to note that even though they are low in carbs and sugar, they should still be consumed in moderation as part of a balanced diet. As always, it is recommended to read the nutrition labels and ingredient lists carefully before buying any keto gummies, to make sure they line up with your dietary goals and preferences.
VISIT HERE FOR OFFICIAL WEBSITE: https://supplementtycoon.com/de/greeny-slim-fruchtgummis/
|
Hanpt/sentence-transformer-ja-triplet
|
Hanpt
| 2023-08-24T06:42:54Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-24T06:42:48Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Hanpt/sentence-transformer-ja-triplet
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Hanpt/sentence-transformer-ja-triplet')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Hanpt/sentence-transformer-ja-triplet')
model = AutoModel.from_pretrained('Hanpt/sentence-transformer-ja-triplet')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Hanpt/sentence-transformer-ja-triplet)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 432 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 432,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
dkimds/a2c-PandaReachDense-v3
|
dkimds
| 2023-08-24T06:17:56Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T06:12:25Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
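A minimal loading sketch, assuming the usual `huggingface_sb3` layout; the filename is an assumption, and rolling the policy out needs panda-gym v3, which registers PandaReachDense-v3 with gymnasium:
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from the common "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(
    repo_id="dkimds/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```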
|
HGV1408/Data
|
HGV1408
| 2023-08-24T06:17:51Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-24T06:15:20Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6997 | 0.54 | 500 | 1.4834 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V9.5.1.1
|
SHENMU007
| 2023-08-24T06:10:01Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-08-24T01:38:49Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
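A minimal synthesis sketch, assuming the checkpoint works with the stock SpeechT5 classes; the all-zeros speaker embedding is a crude placeholder (a real 512-dimensional x-vector gives far better results):
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V9.5.1.1")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V9.5.1.1")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="欢迎使用语音合成。", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```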
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Stepa/ddpm-celebahq-finetuned-butterflies-2epochs
|
Stepa
| 2023-08-24T06:08:25Z | 46 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-24T06:08:06Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Stepa/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
ardt-multipart/ardt-multipart-arrl_sgld_train_walker2d_high-2408_0605-33
|
ardt-multipart
| 2023-08-24T06:01:05Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T05:06:49Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_sgld_train_walker2d_high-2408_0605-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_sgld_train_walker2d_high-2408_0605-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Afbnff/B
|
Afbnff
| 2023-08-24T05:29:13Z | 0 | 0 | null |
[
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
] | null | 2023-08-24T05:28:01Z |
---
datasets:
- fka/awesome-chatgpt-prompts
metrics:
- accuracy
---
|
tanguyrenaudie/pokemon-lora
|
tanguyrenaudie
| 2023-08-24T05:21:45Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-23T03:05:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - tanguyrenaudie/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the model repository.
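A minimal generation sketch, assuming the weights load onto the base model with the `diffusers` LoRA loader:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Apply the LoRA adaptation weights from this repository.
pipe.load_lora_weights("tanguyrenaudie/pokemon-lora")

image = pipe("a cute green pokemon with big eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```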
|
ardt-multipart/ardt-multipart-arrl_train_walker2d_high-2408_0434-99
|
ardt-multipart
| 2023-08-24T05:04:48Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T03:36:33Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_train_walker2d_high-2408_0434-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_train_walker2d_high-2408_0434-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
neil-code/autotrain-test-summarization-84415142559
|
neil-code
| 2023-08-24T04:28:12Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:neil-code/autotrain-data-test-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-08-24T04:23:26Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- neil-code/autotrain-data-test-summarization
co2_eq_emissions:
emissions: 3.0878646296058494
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 84415142559
- CO2 Emissions (in grams): 3.0879
## Validation Metrics
- Loss: 1.534
- Rouge1: 33.336
- Rouge2: 11.361
- RougeL: 27.779
- RougeLsum: 29.966
- Gen Len: 18.773
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/neil-code/autotrain-test-summarization-84415142559
```
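You can also load the model locally with `transformers`; a minimal sketch (the input string is only a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="neil-code/autotrain-test-summarization-84415142559")
text = "Replace this with the long passage you want to summarize."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```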
|
larryvrh/tigerbot-13b-chat-sharegpt-lora
|
larryvrh
| 2023-08-24T04:27:43Z | 0 | 1 | null |
[
"text-generation",
"zh",
"dataset:larryvrh/sharegpt_zh-only",
"region:us"
] |
text-generation
| 2023-08-24T02:22:02Z |
---
datasets:
- larryvrh/sharegpt_zh-only
language:
- zh
pipeline_tag: text-generation
---
A re-aligned version of [TigerResearch/tigerbot-13b-chat](https://huggingface.co/TigerResearch/tigerbot-13b-chat), tuned on 8,631 Chinese ShareGPT conversations from [larryvrh/sharegpt_zh-only](https://huggingface.co/datasets/larryvrh/sharegpt_zh-only).
It improves the model's ability to keep track of context across multi-turn conversations,
and mitigates the overly "anthropomorphic" tone of its answers in some scenarios.
Before fine-tuning:

After fine-tuning:

You can use the bundled [webui](https://huggingface.co/larryvrh/tigerbot-13b-chat-sharegpt-lora/blob/main/chat_webui.py) for quick testing.

|
ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled-3rd
|
ALM-AHME
| 2023-08-24T04:14:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-large-1k-224",
"base_model:finetune:facebook/convnextv2-large-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-24T00:38:09Z |
---
license: apache-2.0
base_model: facebook/convnextv2-large-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled-3rd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled-3rd
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0543
- Accuracy: 0.9873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5284 | 1.0 | 199 | 0.5013 | 0.9095 |
| 0.2084 | 2.0 | 398 | 0.2076 | 0.9524 |
| 0.1274 | 3.0 | 597 | 0.1459 | 0.9566 |
| 0.1618 | 4.0 | 796 | 0.1534 | 0.9383 |
| 0.2118 | 5.0 | 995 | 0.0877 | 0.9727 |
| 0.0306 | 6.0 | 1194 | 0.1048 | 0.9656 |
| 0.1012 | 7.0 | 1393 | 0.0674 | 0.9755 |
| 0.2079 | 8.0 | 1592 | 0.0662 | 0.9731 |
| 0.087 | 9.0 | 1791 | 0.1183 | 0.9458 |
| 0.1543 | 10.0 | 1990 | 0.0605 | 0.9840 |
| 0.0788 | 11.0 | 2189 | 0.0557 | 0.9868 |
| 0.0604 | 12.0 | 2388 | 0.0461 | 0.9868 |
| 0.0306 | 13.0 | 2587 | 0.0476 | 0.9854 |
| 0.0365 | 14.0 | 2786 | 0.0543 | 0.9873 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
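A minimal inference sketch with the `transformers` image-classification pipeline (the image path is only a placeholder for a BreakHis-style histopathology image):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled-3rd",
)
print(classifier("sample.png"))
```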
|
platzi/platzi-vit-model-jose-alcocer
|
platzi
| 2023-08-24T04:08:14Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-23T04:13:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: platzi-vit-model-jose-alcocer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-jose-alcocer
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0074
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1491 | 3.85 | 500 | 0.0074 | 1.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
rtlabs/StableCode-3B
|
rtlabs
| 2023-08-24T04:02:04Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"code",
"dataset:bigcode/starcoderdata",
"arxiv:2104.09864",
"arxiv:1910.02054",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-23T22:27:04Z |
---
datasets:
- bigcode/starcoderdata
language:
- code
tags:
- causal-lm
model-index:
- name: stabilityai/stablecode-completion-alpha-3b-4k
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.1768
verified: false
- name: pass@10
type: pass@10
value: 0.2701
verified: false
license: apache-2.0
duplicated_from: stabilityai/stablecode-completion-alpha-3b-4k
---
# `StableCode-Completion-Alpha-3B-4K`
## Intro
This is a conversion of the `StableCode-Completion-Alpha-3B-4K` model from StabilityAI for use with the FOSS TabbyML development toolset. Nothing other than conversion to the CTranslate2-compatible format has been undertaken so that the model can be used by TabbyML; this included the creation of the appropriate configuration for TabbyML.
## Original Model Description
`StableCode-Completion-Alpha-3B-4K` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow developer survey.
## Usage
The model is intended to do single- and multi-line code completion from a long context window of up to 4k tokens.
Get started generating code with `StableCode-Completion-Alpha-3B-4k` by using the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b-4k")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/stablecode-completion-alpha-3b-4k",
trust_remote_code=True,
torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=48,
temperature=0.2,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Completion-Alpha-3B-4k` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
### Model Architecture
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560 | 32 | 32 | 4096 |
* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only
## Training
`StableCode-Completion-Alpha-3B-4k` is pre-trained at a context length of 4096 for 300 billion tokens on the `bigcode/starcoder-data`.
### Training Dataset
The first pre-training stage relies on 300B tokens sourced from the top programming languages occurring in the Stack Overflow developer survey that are present in the `starcoder-data` dataset.
### Training Procedure
The model is pre-trained on the dataset mixes mentioned above in mixed precision (BF16), optimized with AdamW, and trained using the [StarCoder](https://huggingface.co/bigcode/starcoder) tokenizer with a vocabulary size of 49k.
* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)) and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf))
## Use and Limitations
### Intended Use
StableCode-Completion-Alpha-3B-4K independently generates new code completions, but we recommend that you use StableCode-Completion-Alpha-3B-4K together with the tool developed by BigCode and HuggingFace [(huggingface/huggingface-vscode: Code completion VSCode extension for OSS models (github.com))](https://github.com/huggingface/huggingface-vscode), to identify and, if necessary, attribute any outputs that match training code.
### Limitations and bias
This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.
## How to cite
```bibtex
@misc{StableCodeCompleteAlpha4K,
url={[https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k](https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k)},
title={Stable Code Complete Alpha},
author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
}
```
|
xszhou/ppo-LunarLander-v2
|
xszhou
| 2023-08-24T03:44:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T03:44:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.49 +/- 17.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
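A minimal sketch of what that usage typically looks like (the checkpoint filename follows the usual course convention and is an assumption; adjust it to the actual file in this repo):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it with SB3 (recent SB3 versions work with gymnasium).
checkpoint = load_from_hub(repo_id="xszhou/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```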
|
dkqjrm/20230824103950
|
dkqjrm
| 2023-08-24T03:36:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T01:40:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824103950'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824103950
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6377
- Accuracy: 0.7401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.9784 | 0.5307 |
| 0.905 | 2.0 | 624 | 0.6756 | 0.5126 |
| 0.905 | 3.0 | 936 | 0.7039 | 0.5379 |
| 0.7844 | 4.0 | 1248 | 0.6938 | 0.5090 |
| 0.7863 | 5.0 | 1560 | 0.7988 | 0.5487 |
| 0.7863 | 6.0 | 1872 | 0.7152 | 0.5993 |
| 0.7505 | 7.0 | 2184 | 0.7856 | 0.6173 |
| 0.7505 | 8.0 | 2496 | 0.6053 | 0.6606 |
| 0.7043 | 9.0 | 2808 | 0.6424 | 0.5957 |
| 0.7083 | 10.0 | 3120 | 0.7874 | 0.6354 |
| 0.7083 | 11.0 | 3432 | 0.6513 | 0.6390 |
| 0.6321 | 12.0 | 3744 | 0.5910 | 0.7148 |
| 0.6204 | 13.0 | 4056 | 0.5993 | 0.7112 |
| 0.6204 | 14.0 | 4368 | 0.5440 | 0.7292 |
| 0.5835 | 15.0 | 4680 | 0.5542 | 0.7184 |
| 0.5835 | 16.0 | 4992 | 0.6144 | 0.7329 |
| 0.5634 | 17.0 | 5304 | 0.5821 | 0.6968 |
| 0.5461 | 18.0 | 5616 | 0.6826 | 0.5776 |
| 0.5461 | 19.0 | 5928 | 0.5617 | 0.7148 |
| 0.5275 | 20.0 | 6240 | 0.7824 | 0.6643 |
| 0.4726 | 21.0 | 6552 | 0.6157 | 0.7437 |
| 0.4726 | 22.0 | 6864 | 0.6498 | 0.7076 |
| 0.465 | 23.0 | 7176 | 0.6576 | 0.7292 |
| 0.465 | 24.0 | 7488 | 0.5731 | 0.7184 |
| 0.4375 | 25.0 | 7800 | 0.7370 | 0.7220 |
| 0.4182 | 26.0 | 8112 | 0.5957 | 0.7148 |
| 0.4182 | 27.0 | 8424 | 0.6041 | 0.7256 |
| 0.4008 | 28.0 | 8736 | 0.5790 | 0.7184 |
| 0.392 | 29.0 | 9048 | 0.6321 | 0.7329 |
| 0.392 | 30.0 | 9360 | 0.6253 | 0.7148 |
| 0.3691 | 31.0 | 9672 | 0.6031 | 0.7329 |
| 0.3691 | 32.0 | 9984 | 0.5903 | 0.7148 |
| 0.3659 | 33.0 | 10296 | 0.6663 | 0.7329 |
| 0.3375 | 34.0 | 10608 | 0.6000 | 0.7292 |
| 0.3375 | 35.0 | 10920 | 0.5734 | 0.7256 |
| 0.3372 | 36.0 | 11232 | 0.6547 | 0.7329 |
| 0.3242 | 37.0 | 11544 | 0.6508 | 0.7401 |
| 0.3242 | 38.0 | 11856 | 0.6472 | 0.7365 |
| 0.3199 | 39.0 | 12168 | 0.6785 | 0.7365 |
| 0.3199 | 40.0 | 12480 | 0.6019 | 0.7365 |
| 0.3014 | 41.0 | 12792 | 0.5783 | 0.7329 |
| 0.3011 | 42.0 | 13104 | 0.6245 | 0.7329 |
| 0.3011 | 43.0 | 13416 | 0.6497 | 0.7292 |
| 0.2909 | 44.0 | 13728 | 0.6170 | 0.7365 |
| 0.2725 | 45.0 | 14040 | 0.6515 | 0.7437 |
| 0.2725 | 46.0 | 14352 | 0.6511 | 0.7365 |
| 0.286 | 47.0 | 14664 | 0.6303 | 0.7292 |
| 0.286 | 48.0 | 14976 | 0.6408 | 0.7365 |
| 0.2713 | 49.0 | 15288 | 0.7056 | 0.7292 |
| 0.2574 | 50.0 | 15600 | 0.6540 | 0.7365 |
| 0.2574 | 51.0 | 15912 | 0.5996 | 0.7256 |
| 0.2735 | 52.0 | 16224 | 0.6616 | 0.7329 |
| 0.2646 | 53.0 | 16536 | 0.6601 | 0.7365 |
| 0.2646 | 54.0 | 16848 | 0.6284 | 0.7329 |
| 0.2494 | 55.0 | 17160 | 0.6420 | 0.7329 |
| 0.2494 | 56.0 | 17472 | 0.6434 | 0.7401 |
| 0.2512 | 57.0 | 17784 | 0.6324 | 0.7437 |
| 0.2452 | 58.0 | 18096 | 0.6028 | 0.7365 |
| 0.2452 | 59.0 | 18408 | 0.6412 | 0.7401 |
| 0.2491 | 60.0 | 18720 | 0.6377 | 0.7401 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230824104100
|
dkqjrm
| 2023-08-24T03:35:48Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T01:41:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824104100'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824104100
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0729
- Accuracy: 0.7473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.2294 | 0.5307 |
| 0.3686 | 2.0 | 624 | 0.5346 | 0.4729 |
| 0.3686 | 3.0 | 936 | 0.2223 | 0.5235 |
| 0.2907 | 4.0 | 1248 | 0.1895 | 0.4729 |
| 0.2686 | 5.0 | 1560 | 0.1783 | 0.5018 |
| 0.2686 | 6.0 | 1872 | 0.1995 | 0.5884 |
| 0.2686 | 7.0 | 2184 | 0.3037 | 0.5740 |
| 0.2686 | 8.0 | 2496 | 0.1386 | 0.6715 |
| 0.266 | 9.0 | 2808 | 0.1311 | 0.7076 |
| 0.2363 | 10.0 | 3120 | 0.1403 | 0.6968 |
| 0.2363 | 11.0 | 3432 | 0.2988 | 0.5957 |
| 0.215 | 12.0 | 3744 | 0.1119 | 0.6968 |
| 0.198 | 13.0 | 4056 | 0.1238 | 0.6859 |
| 0.198 | 14.0 | 4368 | 0.1107 | 0.7040 |
| 0.1845 | 15.0 | 4680 | 0.1604 | 0.6570 |
| 0.1845 | 16.0 | 4992 | 0.1143 | 0.7004 |
| 0.1664 | 17.0 | 5304 | 0.1197 | 0.7148 |
| 0.159 | 18.0 | 5616 | 0.1122 | 0.7329 |
| 0.159 | 19.0 | 5928 | 0.1038 | 0.7184 |
| 0.145 | 20.0 | 6240 | 0.0973 | 0.7040 |
| 0.1304 | 21.0 | 6552 | 0.0996 | 0.7292 |
| 0.1304 | 22.0 | 6864 | 0.0938 | 0.7473 |
| 0.1264 | 23.0 | 7176 | 0.1212 | 0.7437 |
| 0.1264 | 24.0 | 7488 | 0.0953 | 0.7256 |
| 0.1212 | 25.0 | 7800 | 0.0899 | 0.7329 |
| 0.1172 | 26.0 | 8112 | 0.1037 | 0.7365 |
| 0.1172 | 27.0 | 8424 | 0.0844 | 0.7292 |
| 0.1122 | 28.0 | 8736 | 0.0850 | 0.7365 |
| 0.1131 | 29.0 | 9048 | 0.0875 | 0.7220 |
| 0.1131 | 30.0 | 9360 | 0.0904 | 0.7437 |
| 0.1082 | 31.0 | 9672 | 0.0883 | 0.7184 |
| 0.1082 | 32.0 | 9984 | 0.0800 | 0.7509 |
| 0.1086 | 33.0 | 10296 | 0.0897 | 0.7509 |
| 0.1015 | 34.0 | 10608 | 0.0837 | 0.7473 |
| 0.1015 | 35.0 | 10920 | 0.0820 | 0.7329 |
| 0.099 | 36.0 | 11232 | 0.0819 | 0.7365 |
| 0.0942 | 37.0 | 11544 | 0.0858 | 0.7509 |
| 0.0942 | 38.0 | 11856 | 0.0793 | 0.7437 |
| 0.0956 | 39.0 | 12168 | 0.0823 | 0.7581 |
| 0.0956 | 40.0 | 12480 | 0.0860 | 0.7256 |
| 0.0921 | 41.0 | 12792 | 0.0753 | 0.7545 |
| 0.0911 | 42.0 | 13104 | 0.0838 | 0.7473 |
| 0.0911 | 43.0 | 13416 | 0.0763 | 0.7545 |
| 0.0894 | 44.0 | 13728 | 0.0761 | 0.7473 |
| 0.0886 | 45.0 | 14040 | 0.0752 | 0.7581 |
| 0.0886 | 46.0 | 14352 | 0.0743 | 0.7437 |
| 0.0855 | 47.0 | 14664 | 0.0759 | 0.7581 |
| 0.0855 | 48.0 | 14976 | 0.0801 | 0.7437 |
| 0.0837 | 49.0 | 15288 | 0.0797 | 0.7473 |
| 0.083 | 50.0 | 15600 | 0.0734 | 0.7509 |
| 0.083 | 51.0 | 15912 | 0.0756 | 0.7545 |
| 0.0845 | 52.0 | 16224 | 0.0744 | 0.7401 |
| 0.084 | 53.0 | 16536 | 0.0731 | 0.7545 |
| 0.084 | 54.0 | 16848 | 0.0736 | 0.7473 |
| 0.0797 | 55.0 | 17160 | 0.0734 | 0.7653 |
| 0.0797 | 56.0 | 17472 | 0.0735 | 0.7545 |
| 0.0803 | 57.0 | 17784 | 0.0737 | 0.7545 |
| 0.0792 | 58.0 | 18096 | 0.0735 | 0.7581 |
| 0.0792 | 59.0 | 18408 | 0.0732 | 0.7581 |
| 0.0815 | 60.0 | 18720 | 0.0729 | 0.7473 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ardt-multipart/ardt-multipart-arrl_train_walker2d_high-2408_0303-66
|
ardt-multipart
| 2023-08-24T03:34:18Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T02:04:54Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_train_walker2d_high-2408_0303-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_train_walker2d_high-2408_0303-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JohnnyHiker/llama2-qlora-finetunined-french
|
JohnnyHiker
| 2023-08-24T03:29:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T03:29:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
dkqjrm/20230824103319
|
dkqjrm
| 2023-08-24T03:23:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T01:33:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824103319'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824103319
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2256
- Accuracy: 0.7473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 1.2170 | 0.5307 |
| 0.9844 | 2.0 | 624 | 0.7365 | 0.5090 |
| 0.9844 | 3.0 | 936 | 0.6978 | 0.5632 |
| 0.8956 | 4.0 | 1248 | 0.8855 | 0.4765 |
| 0.8957 | 5.0 | 1560 | 1.0223 | 0.5379 |
| 0.8957 | 6.0 | 1872 | 0.6873 | 0.6137 |
| 0.7665 | 7.0 | 2184 | 0.8629 | 0.6173 |
| 0.7665 | 8.0 | 2496 | 0.6861 | 0.6570 |
| 0.734 | 9.0 | 2808 | 0.6714 | 0.7076 |
| 0.7238 | 10.0 | 3120 | 0.6298 | 0.7184 |
| 0.7238 | 11.0 | 3432 | 0.5975 | 0.7184 |
| 0.6786 | 12.0 | 3744 | 0.8311 | 0.6968 |
| 0.6396 | 13.0 | 4056 | 0.7136 | 0.6751 |
| 0.6396 | 14.0 | 4368 | 0.7183 | 0.6859 |
| 0.6481 | 15.0 | 4680 | 0.6652 | 0.7076 |
| 0.6481 | 16.0 | 4992 | 1.0367 | 0.6823 |
| 0.6106 | 17.0 | 5304 | 0.7197 | 0.6895 |
| 0.6011 | 18.0 | 5616 | 0.6058 | 0.7292 |
| 0.6011 | 19.0 | 5928 | 0.7227 | 0.7112 |
| 0.5978 | 20.0 | 6240 | 1.1472 | 0.6570 |
| 0.5309 | 21.0 | 6552 | 0.6741 | 0.7256 |
| 0.5309 | 22.0 | 6864 | 0.9335 | 0.6787 |
| 0.5392 | 23.0 | 7176 | 0.8296 | 0.7365 |
| 0.5392 | 24.0 | 7488 | 0.9097 | 0.7040 |
| 0.5058 | 25.0 | 7800 | 0.8278 | 0.7292 |
| 0.4669 | 26.0 | 8112 | 1.0859 | 0.6498 |
| 0.4669 | 27.0 | 8424 | 0.9387 | 0.7184 |
| 0.462 | 28.0 | 8736 | 1.0893 | 0.7365 |
| 0.4757 | 29.0 | 9048 | 1.3568 | 0.6859 |
| 0.4757 | 30.0 | 9360 | 1.0252 | 0.7040 |
| 0.4237 | 31.0 | 9672 | 1.0489 | 0.7329 |
| 0.4237 | 32.0 | 9984 | 0.8661 | 0.7292 |
| 0.4275 | 33.0 | 10296 | 0.9781 | 0.7437 |
| 0.3722 | 34.0 | 10608 | 0.8879 | 0.7329 |
| 0.3722 | 35.0 | 10920 | 0.9932 | 0.7292 |
| 0.3741 | 36.0 | 11232 | 1.0509 | 0.7365 |
| 0.3358 | 37.0 | 11544 | 1.3875 | 0.7329 |
| 0.3358 | 38.0 | 11856 | 1.2366 | 0.7220 |
| 0.3415 | 39.0 | 12168 | 1.0563 | 0.7329 |
| 0.3415 | 40.0 | 12480 | 0.9688 | 0.7401 |
| 0.3357 | 41.0 | 12792 | 0.8598 | 0.7329 |
| 0.3094 | 42.0 | 13104 | 1.0506 | 0.7329 |
| 0.3094 | 43.0 | 13416 | 1.3257 | 0.7365 |
| 0.2947 | 44.0 | 13728 | 1.1759 | 0.7365 |
| 0.2832 | 45.0 | 14040 | 1.1699 | 0.7329 |
| 0.2832 | 46.0 | 14352 | 1.1070 | 0.7401 |
| 0.2808 | 47.0 | 14664 | 1.1519 | 0.7473 |
| 0.2808 | 48.0 | 14976 | 1.0674 | 0.7401 |
| 0.2715 | 49.0 | 15288 | 1.1491 | 0.7401 |
| 0.252 | 50.0 | 15600 | 1.0819 | 0.7473 |
| 0.252 | 51.0 | 15912 | 0.9650 | 0.7473 |
| 0.2577 | 52.0 | 16224 | 1.0753 | 0.7437 |
| 0.2579 | 53.0 | 16536 | 1.0896 | 0.7473 |
| 0.2579 | 54.0 | 16848 | 1.0579 | 0.7401 |
| 0.2395 | 55.0 | 17160 | 1.1172 | 0.7509 |
| 0.2395 | 56.0 | 17472 | 1.1540 | 0.7509 |
| 0.2392 | 57.0 | 17784 | 1.2162 | 0.7509 |
| 0.22 | 58.0 | 18096 | 1.1978 | 0.7509 |
| 0.22 | 59.0 | 18408 | 1.2381 | 0.7473 |
| 0.2242 | 60.0 | 18720 | 1.2256 | 0.7473 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
shareAI/bimoGPT-llama2-13b
|
shareAI
| 2023-08-24T03:22:57Z | 0 | 7 |
transformers
|
[
"transformers",
"question-answering",
"zh",
"en",
"dataset:shareAI/ShareGPT-Chinese-English-90k",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-03T03:35:44Z |
---
license: openrail
datasets:
- shareAI/ShareGPT-Chinese-English-90k
language:
- zh
- en
library_name: transformers
pipeline_tag: question-answering
---
bimoGPT - a version of the llama2 13b base model fine-tuned with careful Chinese SFT, with a tone and conversational Q&A ability close to ChatGPT, as well as solid coding ability.
Base model: https://www.codewithgpu.com/m/file/llama2-13b-Chinese-chat (specifically the llama2-13B-sharegpt_cn-epoch2.zip file there)
|
alexdphan/bloom_prompt_tuning_1692845692.2492282
|
alexdphan
| 2023-08-24T03:19:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T03:19:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
NEU-HAI/Llama-2-7b-alpaca-cleaned
|
NEU-HAI
| 2023-08-24T02:51:32Z | 107 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"alpaca",
"en",
"dataset:yahma/alpaca-cleaned",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-22T18:04:30Z |
---
license: cc-by-nc-4.0
datasets:
- yahma/alpaca-cleaned
language:
- en
pipeline_tag: text-generation
tags:
- llama-2
- alpaca
---
# Model Card for Llama-2-7b-alpaca-cleaned
<!-- Provide a quick summary of what the model is/does. -->
This model checkpoint is the Llama-2-7b fine-tuned on [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) with the original Alpaca fine-tuning hyper-parameters.
## Model Details
### Model Description
This model checkpoint is the Llama-2-7b fine-tuned on [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) with the original Alpaca fine-tuning hyper-parameters. \
The original Alpaca model was fine-tuned from Llama on the alpaca dataset by researchers at Stanford University.
- **Developed by:** NEU Human-centered AI Lab
- **Shared by [optional]:** NEU Human-centered AI Lab
- **Model type:** Text-generation
- **Language(s) (NLP):** English
- **License:** cc-by-nc-4.0 (to comply with the alpaca-cleaned dataset)
- **Finetuned from model [optional]:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/meta-llama/Llama-2-7b
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model is intended to be used for research purposes only in English, complying with [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca). \
The model has been fine-tuned on the [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) for assistant-like chat and general natural language generation tasks. \
The use of this model should also comply with the restrictions from [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The out-of-scope use of this model should also comply with the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca) and [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b).
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## How to Get Started with the Model
Use the code below to get started with the model.
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
model = AutoModelForCausalLM.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
```
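Continuing from the snippet above, a short generation example; the Alpaca-style prompt template is an assumption based on the original Stanford Alpaca recipe:
```
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```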
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
We use the [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned), which is the cleaned version of the original [alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca) created by researchers from Stanford University.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We follow the same training procedure and mostly the same hyper-parameters that were used to fine-tune the original Alpaca model on Llama. The procedure can be found in the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca).
#### Training Hyperparameters
```
--bf16 True \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True
```
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
N/A
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
N/A
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
N/A
### Results
N/A
#### Summary
N/A
<!--
## Environmental Impact
Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** {{ hardware | default("[More Information Needed]", true)}}
- **Hours used:** {{ hours_used | default("[More Information Needed]", true)}}
- **Cloud Provider:** {{ cloud_provider | default("[More Information Needed]", true)}}
- **Compute Region:** {{ cloud_region | default("[More Information Needed]", true)}}
- **Carbon Emitted:** {{ co2_emitted | default("[More Information Needed]", true)}}
-->
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
Please cite the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca)
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
## Model Card Authors
Northeastern Human-centered AI Lab
## Model Card Contact
|
Timucin/q-Taxi
|
Timucin
| 2023-08-24T02:44:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T02:44:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # assumes the gymnasium package (classic gym also works)

# `load_from_hub` is the helper defined in the Deep RL course notebook (it unpickles the Q-table from the Hub)
model = load_from_hub(repo_id="Timucin/q-Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mcwei/rvinpaint
|
mcwei
| 2023-08-24T02:39:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T00:41:06Z |
---
license: creativeml-openrail-m
---
|
Timucin/q-FrozenLake-v1-4x4-noSlippery
|
Timucin
| 2023-08-24T02:38:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T02:38:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # assumes the gymnasium package (classic gym also works)

# `load_from_hub` is the helper defined in the Deep RL course notebook (it unpickles the Q-table from the Hub)
model = load_from_hub(repo_id="Timucin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ausboss/llama2-13b-supercot-loras
|
ausboss
| 2023-08-24T02:24:20Z | 0 | 5 | null |
[
"region:us"
] | null | 2023-08-21T15:15:43Z |
# Llama-2-13b SuperCOT lora checkpoints
These are my Llama-2-13b SuperCOT Lora checkpoints trained using QLora on the [SuperCOT Dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset).
### Architecture
- **Model Architecture**: Llama-2-13b
- **Training Algorithm**: QLora
### Training Details
- **Dataset**: [SuperCOT Dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)
- **Dataset type**: alpaca
- **Training Parameters**: [See Here](https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/llama-2/qlora.yml)
- **Training Environment**: Axolotl
- **sequence_len**: 4096
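### Usage
A minimal sketch of applying one of these checkpoints on top of a Llama-2-13b base with PEFT (the base repo and the checkpoint folder name below are placeholders; point them at the model and adapter you actually downloaded):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "meta-llama/Llama-2-13b-hf"  # placeholder: any Llama-2-13b(-derived) base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto", torch_dtype="auto")
# "./checkpoint-xxxx" is a placeholder for one of the LoRA checkpoint folders in this repo.
model = PeftModel.from_pretrained(model, "./checkpoint-xxxx")
```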
## Acknowledgments
Special thanks to the creators of the datasets in SuperCOT. Additionally, thanks to Kaiokendev for curating the SuperCOT dataset, and thanks to the contributors of Axolotl.
## Stuff generated from axolotl:
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
- PEFT 0.5.0.dev0
|
LarryAIDraw/Lucy-08
|
LarryAIDraw
| 2023-08-24T02:23:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:06:40Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/132939/lucy-seiland-trails-of-cold-steel-4-sen-no-kiseki-4
|
LarryAIDraw/Aurier-10
|
LarryAIDraw
| 2023-08-24T02:23:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:07:09Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/132943/aurier-vander-trails-of-cold-steel-3-sen-no-kiseki-3
|
LarryAIDraw/AnneHalfordExp
|
LarryAIDraw
| 2023-08-24T02:22:12Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:06:14Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/133130/anne-halford-sugar-apple-fairy-tale
|
LarryAIDraw/Fuwawa_Abyssgard-10
|
LarryAIDraw
| 2023-08-24T02:20:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:05:02Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/117233/fuwawa-abyssgard-hololive-en-lora
|
LarryAIDraw/Atago_and_Takao_20230820183759-000014
|
LarryAIDraw
| 2023-08-24T02:19:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:03:56Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/133344/atago-and-tako-lora
|
LarryAIDraw/shimanto
|
LarryAIDraw
| 2023-08-24T02:18:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:03:25Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/133172/ijn-shimanto-or-azur-lane
|
LarryAIDraw/ChristinaHope
|
LarryAIDraw
| 2023-08-24T02:17:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:02:16Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/133295/christina-hope-the-eminence-in-shadow
|
LarryAIDraw/CHAR-FuwawaAbyssgard
|
LarryAIDraw
| 2023-08-24T02:16:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:01:23Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/132928/fuwawa-abyssgard-or-hololive
|
lianlian123/Reinforce-CartPole8
|
lianlian123
| 2023-08-24T02:14:00Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T08:21:31Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ardt-multipart/ardt-multipart-arrl_train_walker2d_high-2408_0127-33
|
ardt-multipart
| 2023-08-24T02:03:02Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T00:28:40Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_train_walker2d_high-2408_0127-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_train_walker2d_high-2408_0127-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JJinBBangMan/marian-finetuned-kde4-en-to-fr
|
JJinBBangMan
| 2023-08-24T02:00:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-24T00:10:39Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.853174528380514
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8568
- Bleu: 52.8532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
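A minimal inference sketch with the `transformers` translation pipeline (the sentence is only an example):
```python
from transformers import pipeline

translator = pipeline("translation", model="JJinBBangMan/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```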
|
cooperic/distilbert-base-uncased-finetuned-emotion
|
cooperic
| 2023-08-24T01:49:06Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-24T00:31:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9283528881025964
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Accuracy: 0.9285
- F1: 0.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8012 | 1.0 | 250 | 0.3094 | 0.9095 | 0.9083 |
| 0.2454 | 2.0 | 500 | 0.2174 | 0.9285 | 0.9284 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
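A minimal inference sketch with the `transformers` text-classification pipeline (the sentence is only an example):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cooperic/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so excited to see you again!"))
```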
|
dkqjrm/20230824083011
|
dkqjrm
| 2023-08-24T01:45:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T23:30:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824083011'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824083011
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3090
- Accuracy: 0.7401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7501 | 1.0 | 623 | 0.9859 | 0.4729 |
| 0.6252 | 2.0 | 1246 | 0.4891 | 0.4801 |
| 0.5769 | 3.0 | 1869 | 1.1271 | 0.4729 |
| 0.5672 | 4.0 | 2492 | 0.4257 | 0.5632 |
| 0.5439 | 5.0 | 3115 | 0.5883 | 0.5415 |
| 0.5426 | 6.0 | 3738 | 0.3734 | 0.6245 |
| 0.61 | 7.0 | 4361 | 0.4410 | 0.5848 |
| 0.4937 | 8.0 | 4984 | 0.4091 | 0.5632 |
| 0.4293 | 9.0 | 5607 | 0.3712 | 0.6282 |
| 0.3897 | 10.0 | 6230 | 0.3441 | 0.6931 |
| 0.3759 | 11.0 | 6853 | 0.3400 | 0.7004 |
| 0.379 | 12.0 | 7476 | 0.3802 | 0.6787 |
| 0.3661 | 13.0 | 8099 | 0.3456 | 0.7184 |
| 0.374 | 14.0 | 8722 | 0.3545 | 0.6859 |
| 0.3441 | 15.0 | 9345 | 0.3219 | 0.7112 |
| 0.3339 | 16.0 | 9968 | 0.3192 | 0.7184 |
| 0.3324 | 17.0 | 10591 | 0.3290 | 0.7184 |
| 0.324 | 18.0 | 11214 | 0.3284 | 0.7112 |
| 0.3641 | 19.0 | 11837 | 0.3100 | 0.7292 |
| 0.3138 | 20.0 | 12460 | 0.3102 | 0.7365 |
| 0.3099 | 21.0 | 13083 | 0.3887 | 0.7076 |
| 0.3095 | 22.0 | 13706 | 0.3443 | 0.7004 |
| 0.3039 | 23.0 | 14329 | 0.3937 | 0.6895 |
| 0.287 | 24.0 | 14952 | 0.3071 | 0.7473 |
| 0.2718 | 25.0 | 15575 | 0.3097 | 0.7184 |
| 0.2711 | 26.0 | 16198 | 0.2888 | 0.7329 |
| 0.2738 | 27.0 | 16821 | 0.2920 | 0.7220 |
| 0.2697 | 28.0 | 17444 | 0.2986 | 0.7329 |
| 0.2589 | 29.0 | 18067 | 0.3092 | 0.7437 |
| 0.2536 | 30.0 | 18690 | 0.3141 | 0.7292 |
| 0.2564 | 31.0 | 19313 | 0.3134 | 0.7401 |
| 0.2493 | 32.0 | 19936 | 0.2962 | 0.7365 |
| 0.2428 | 33.0 | 20559 | 0.3358 | 0.7256 |
| 0.2425 | 34.0 | 21182 | 0.3155 | 0.7148 |
| 0.2342 | 35.0 | 21805 | 0.3000 | 0.7220 |
| 0.2394 | 36.0 | 22428 | 0.2955 | 0.7329 |
| 0.2257 | 37.0 | 23051 | 0.3070 | 0.7509 |
| 0.2272 | 38.0 | 23674 | 0.2959 | 0.7365 |
| 0.2197 | 39.0 | 24297 | 0.3100 | 0.7401 |
| 0.2144 | 40.0 | 24920 | 0.3009 | 0.7365 |
| 0.2164 | 41.0 | 25543 | 0.2957 | 0.7256 |
| 0.2129 | 42.0 | 26166 | 0.3133 | 0.7292 |
| 0.2106 | 43.0 | 26789 | 0.3110 | 0.7329 |
| 0.2069 | 44.0 | 27412 | 0.3072 | 0.7329 |
| 0.2051 | 45.0 | 28035 | 0.3300 | 0.7292 |
| 0.2064 | 46.0 | 28658 | 0.3106 | 0.7256 |
| 0.2039 | 47.0 | 29281 | 0.3114 | 0.7292 |
| 0.2106 | 48.0 | 29904 | 0.3180 | 0.7365 |
| 0.2008 | 49.0 | 30527 | 0.3099 | 0.7329 |
| 0.1945 | 50.0 | 31150 | 0.3066 | 0.7329 |
| 0.1958 | 51.0 | 31773 | 0.3124 | 0.7401 |
| 0.1939 | 52.0 | 32396 | 0.3230 | 0.7401 |
| 0.1942 | 53.0 | 33019 | 0.3105 | 0.7365 |
| 0.1887 | 54.0 | 33642 | 0.3014 | 0.7256 |
| 0.185 | 55.0 | 34265 | 0.3052 | 0.7365 |
| 0.1868 | 56.0 | 34888 | 0.3155 | 0.7365 |
| 0.1888 | 57.0 | 35511 | 0.3056 | 0.7256 |
| 0.1885 | 58.0 | 36134 | 0.3069 | 0.7329 |
| 0.192 | 59.0 | 36757 | 0.3076 | 0.7329 |
| 0.1807 | 60.0 | 37380 | 0.3090 | 0.7401 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230824082958
|
dkqjrm
| 2023-08-24T01:33:05Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T23:30:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824082958'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824082958
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5547
- Accuracy: 0.7581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1252 | 1.0 | 623 | 0.6915 | 0.5415 |
| 0.9382 | 2.0 | 1246 | 0.7221 | 0.5307 |
| 1.0555 | 3.0 | 1869 | 0.7387 | 0.5199 |
| 0.9336 | 4.0 | 2492 | 0.9751 | 0.6390 |
| 0.8894 | 5.0 | 3115 | 0.9277 | 0.6643 |
| 0.9066 | 6.0 | 3738 | 1.1836 | 0.6931 |
| 0.8496 | 7.0 | 4361 | 0.8242 | 0.7184 |
| 0.7761 | 8.0 | 4984 | 0.9061 | 0.6859 |
| 0.8175 | 9.0 | 5607 | 0.7474 | 0.7220 |
| 0.7575 | 10.0 | 6230 | 0.8582 | 0.7292 |
| 0.747 | 11.0 | 6853 | 0.8351 | 0.7256 |
| 0.728 | 12.0 | 7476 | 0.8912 | 0.7148 |
| 0.8296 | 13.0 | 8099 | 0.9471 | 0.7220 |
| 0.7327 | 14.0 | 8722 | 1.1407 | 0.7148 |
| 0.7284 | 15.0 | 9345 | 0.7681 | 0.7256 |
| 0.6642 | 16.0 | 9968 | 1.4084 | 0.6679 |
| 0.5888 | 17.0 | 10591 | 0.8413 | 0.7329 |
| 0.6074 | 18.0 | 11214 | 0.7461 | 0.7401 |
| 0.625 | 19.0 | 11837 | 0.9516 | 0.7545 |
| 0.5911 | 20.0 | 12460 | 1.3395 | 0.7292 |
| 0.5322 | 21.0 | 13083 | 1.3924 | 0.7509 |
| 0.5247 | 22.0 | 13706 | 1.1553 | 0.7256 |
| 0.5146 | 23.0 | 14329 | 1.6692 | 0.7040 |
| 0.4493 | 24.0 | 14952 | 1.2315 | 0.7437 |
| 0.399 | 25.0 | 15575 | 1.2710 | 0.7545 |
| 0.3644 | 26.0 | 16198 | 1.5049 | 0.7473 |
| 0.4031 | 27.0 | 16821 | 1.5735 | 0.7401 |
| 0.386 | 28.0 | 17444 | 1.4749 | 0.7220 |
| 0.3735 | 29.0 | 18067 | 0.9541 | 0.7365 |
| 0.356 | 30.0 | 18690 | 1.3936 | 0.7473 |
| 0.3496 | 31.0 | 19313 | 0.9982 | 0.7437 |
| 0.3149 | 32.0 | 19936 | 0.9572 | 0.7581 |
| 0.3094 | 33.0 | 20559 | 1.5663 | 0.7256 |
| 0.2886 | 34.0 | 21182 | 1.5993 | 0.7365 |
| 0.2545 | 35.0 | 21805 | 1.1515 | 0.7545 |
| 0.276 | 36.0 | 22428 | 1.2768 | 0.7473 |
| 0.2645 | 37.0 | 23051 | 1.4290 | 0.7509 |
| 0.262 | 38.0 | 23674 | 1.2363 | 0.7617 |
| 0.2261 | 39.0 | 24297 | 1.3446 | 0.7617 |
| 0.2291 | 40.0 | 24920 | 1.0532 | 0.7509 |
| 0.2178 | 41.0 | 25543 | 1.4745 | 0.7509 |
| 0.2104 | 42.0 | 26166 | 1.3830 | 0.7545 |
| 0.217 | 43.0 | 26789 | 1.7099 | 0.7473 |
| 0.214 | 44.0 | 27412 | 1.7054 | 0.7401 |
| 0.1856 | 45.0 | 28035 | 1.4350 | 0.7545 |
| 0.2014 | 46.0 | 28658 | 1.7266 | 0.7473 |
| 0.1759 | 47.0 | 29281 | 1.2659 | 0.7581 |
| 0.2027 | 48.0 | 29904 | 1.8336 | 0.7401 |
| 0.1871 | 49.0 | 30527 | 1.3398 | 0.7509 |
| 0.1586 | 50.0 | 31150 | 1.4948 | 0.7509 |
| 0.1619 | 51.0 | 31773 | 1.3787 | 0.7545 |
| 0.1665 | 52.0 | 32396 | 1.6532 | 0.7545 |
| 0.1786 | 53.0 | 33019 | 1.4697 | 0.7581 |
| 0.1609 | 54.0 | 33642 | 1.5462 | 0.7653 |
| 0.1304 | 55.0 | 34265 | 1.3577 | 0.7581 |
| 0.1576 | 56.0 | 34888 | 1.7004 | 0.7617 |
| 0.1522 | 57.0 | 35511 | 1.4629 | 0.7581 |
| 0.1496 | 58.0 | 36134 | 1.6336 | 0.7581 |
| 0.1406 | 59.0 | 36757 | 1.5699 | 0.7545 |
| 0.1268 | 60.0 | 37380 | 1.5547 | 0.7581 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kimjaewon/whisper-tiny-us
|
kimjaewon
| 2023-08-24T01:25:56Z | 80 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-23T08:52:00Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-us
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.35832349468713104
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-us
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6051
- Wer Ortho: 0.3646
- Wer: 0.3583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0024 | 17.86 | 500 | 0.6051 | 0.3646 | 0.3583 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nxnhjrjtbjfzhrovwl/limarp-llongma2-8k-ggml-f16
|
nxnhjrjtbjfzhrovwl
| 2023-08-24T01:12:05Z | 0 | 0 | null |
[
"arxiv:2305.11206",
"license:agpl-3.0",
"region:us"
] | null | 2023-08-23T18:04:50Z |
---
'[object Object]': null
license: agpl-3.0
---
This repository contains the unquantized merge of [limarp-llongma2-8k lora](https://huggingface.co/lemonilia/limarp-llongma2-8k) in ggml format.
You can quantize the f16 ggml to the quantization of your choice by following the below steps:
1. Download and extract the [llama.cpp binaries](https://github.com/ggerganov/llama.cpp/releases/download/master-41c6741/llama-master-41c6741-bin-win-avx2-x64.zip) ([or compile it yourself if you're on Linux](https://github.com/ggerganov/llama.cpp#build))
2. Move the "quantize" executable to the same folder where you downloaded the f16 ggml model.
3. Open a command prompt window in that same folder and run the following command, adjusting the file names as you see fit.
```bash
quantize.exe limarp-llongma2-13b.ggmlv3.f16.bin limarp-llongma2-13b.ggmlv3.q4_0.bin q4_0
```
4. Press enter to run the command and the quantized model will be generated in the folder.
Below are the contents of the original model card:
# Model Card for LimaRP-LLongMA2-8k-v2
LimaRP-LLongMA2-8k is an experimental [Llama2](https://huggingface.co/meta-llama) finetune narrowly focused on novel-style roleplay chatting, and a continuation of the previously released [LimaRP-llama2](https://huggingface.co/lemonilia/limarp-llama2) with a larger number of training tokens (+95%).
To considerably facilitate uploading, distribution and merging with other models, LoRA adapters are provided. LimaRP-LLongMA2 LoRA adapters, as their name suggests, are intended to be applied on LLongMA-2 models with 8k context ([7B](https://huggingface.co/conceptofmind/LLongMA-2-7b) and [13B](https://huggingface.co/conceptofmind/LLongMA-2-13b)) and their derivatives.
Data updates may be posted in the future. The current version is **v3**.
## Model Details
### Model Description
This is an experimental attempt at creating an RP-oriented fine-tune using a manually curated, high-quality dataset of human-generated conversations. The main rationale for this comes from the observations of [Zhou et al.](https://arxiv.org/abs/2305.11206), who suggested that just 1000-2000 carefully curated training examples may yield high-quality output for assistant-type chatbots. This is in contrast with the commonly employed strategy where a very large number of training examples (tens of thousands to even millions) of widely varying quality are used.
For LimaRP a similar approach was used, with the difference that the conversational data is almost entirely human-generated. Every training example is manually compiled and selected to comply with subjective quality parameters, with virtually no chance for OpenAI-style alignment responses to come up.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to approximate the experience of 1-on-1 roleplay as observed on many Internet forums dedicated to roleplaying. It _must_ be used with a specific format similar to the following template:
```
<<SYSTEM>>
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.
<<AIBOT>>
Character: {utterance}
<<HUMAN>>
User: {utterance}
[etc.]
```
With `<<SYSTEM>>`, `<<AIBOT>>` and `<<HUMAN>>` being special instruct-mode sequences. The text under curly braces must be replaced with appropriate text in _natural language_. Replace `User` and `Character` with actual character names.
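As a purely illustrative sketch, the format above could be assembled programmatically like this (names, personas and utterances are placeholders):
```python
# Illustrative helper for assembling the LimaRP prompt format described above.
# All inputs are placeholders; adapt them to your own character card.
def build_limarp_prompt(char, user, char_persona, user_persona, scenario, history):
    prompt = (
        "<<SYSTEM>>\n"
        f"{char}'s Persona: {char_persona}\n"
        f"{user}'s Persona: {user_persona}\n"
        f"Scenario: {scenario}\n"
        f"Play the role of {char}. You must engage in a roleplaying chat with {user} "
        f"below this line. Do not write dialogues and narration for {user}. "
        f"{char} should respond with messages of medium length.\n"
    )
    for speaker, text in history:  # history is a list of (speaker, utterance) pairs
        tag = "<<AIBOT>>" if speaker == char else "<<HUMAN>>"
        prompt += f"{tag}\n{speaker}: {text}\n"
    return prompt + f"<<AIBOT>>\n{char}:"

print(build_limarp_prompt(
    "Carina", "Anon",
    "A 'big sister' android maid.", "A tired office worker.",
    "Anon comes home after a long day at work.",
    [("Anon", '"I\'m home."')],
))
```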
This more graphical breakdown of the prompt format with a practical example might make it clearer:

### More detailed notes on prompt format, usage and other settings
- **The model has been tested mainly using Oobabooga's `text-generation-webui` as a backend**
- Preferably respect spacing and newlines shown above. This might not be possible yet with some frontends.
- Replace `Character` and `User` in the above template with your desired names.
- The scenario description has a large influence on what the character will do. Try to keep it more open-ended to lessen its impact.
- **The model expects users and characters to use third-person narration in simple past and enclose dialogues with standard quotation marks `" "`.** Other formats are not supported (= not in the training data).
- Do not use newlines in Persona and Scenario. Use natural language.
- The last line in `<<SYSTEM>>` does not need to be written exactly as depicted, but should mention that `Character` and `User` will engage in roleplay and specify the length of `Character`'s messages
- The message lengths used during training are: `tiny`, `short`, `average`, `long`, `huge`, `humongous`. However, there might not have been enough training examples for each length for this instruction to have a significant impact. The preferred lengths for this type of role-playing are `average` or `long`.
- Suggested text generation settings:
- Temperature ~0.70
- Tail-Free Sampling 0.85
- Repetition penalty ~1.10 (Compared to LLaMAv1, Llama2 appears to require a somewhat higher rep.pen.)
- Not used: Top-P (disabled/set to 1.0), Top-K (disabled/set to 0), Typical P (disabled/set to 1.0)
### Sample character cards
Here are a few example **SillyTavern character cards** following the required format; download and import into SillyTavern. Feel free to modify and adapt them to your purposes.
- [Carina, a 'big sister' android maid](https://files.catbox.moe/1qcqqj.png)
- [Charlotte, a cute android maid](https://files.catbox.moe/k1x9a7.png)
- [Etma, an 'aligned' AI assistant](https://files.catbox.moe/dj8ggi.png)
- [Mila, an anthro pet catgirl](https://files.catbox.moe/amnsew.png)
- [Samuel, a handsome vampire](https://files.catbox.moe/f9uiw1.png)
And here is a sample of how the model is intended to behave with proper chat and prompt formatting: https://files.catbox.moe/egfd90.png
### Other tips
It's possible to make the model automatically generate random character information and scenario by adding just `<<SYSTEM>>` and the character name in text completion mode in `text-generation-webui`, as done here (click to enlarge). The format generally closely matches that of the training data:

### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model has not been tested for:
- IRC-style chat
- Markdown-style roleplay (asterisks for actions, dialogue lines without quotation marks)
- Storywriting
- Usage without the suggested prompt format
Furthermore, the model is not intended nor expected to provide factual and accurate information on any subject.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
The model may easily output disturbing and socially inappropriate content and therefore should not be used by minors or within environments where a general audience is expected. Its outputs will have in general a strong NSFW bias unless the character card/description de-emphasizes it.
## How to Get Started with the Model
Download and load with `text-generation-webui` as a back-end application. It's suggested to start the `webui` via command line. Assuming you have copied the LoRA files under a subdirectory called `lora/limarp-llongma2-7b`, you would use something like this for the 7B model:
```
python server.py --api --verbose --model LLongMA-7B --lora limarp-llongma2-7b
```
When using 4-bit `bitsnbytes` it is suggested to use double quantization to increase accuracy. The starting command may be something like this:
```
python server.py --verbose --api --model LLongMA-2-13B --lora limarp13-llongma2-13b --load-in-4bit --use_double_quant
```
Then, preferably use [SillyTavern](https://github.com/SillyTavern/SillyTavern) as a front-end using the following settings:

In addition to enabling the instruct mode with the correct sequences, it's particularly important to **enable "Include names"**, as the model was trained with them at the start of each utterance. If it's disabled, the model can get confused and will often write for the user in its responses.
To take advantage of this model's larger context length, unlock the context size and set it up to any length up to 8192 tokens, depending on your VRAM constraints. On most consumer GPUs this will likely need to be set to a lower value.

It is **recommended to ban/disable the EOS token** as it can for instance apparently give [artifacts or tokenization issues](https://files.catbox.moe/cxfrzu.png) when it ends up getting generated close to punctuation or quotation marks, at least in SillyTavern. These would typically happen
with AI responses.

## Training Details
### Training Data
The training data comprises about **1500** manually edited roleplaying conversation threads from various Internet RP forums, for about **24 megabytes** of data.
Character and Scenario information was initially filled in for every thread, mainly with the help of `gpt-4`. Later on, this was accomplished with a custom summarizer. Conversations in the dataset are almost entirely human-generated except for a handful of messages. Character names in the RP stories have been isolated and replaced with standard placeholder strings. Usernames, out-of-context (OOC) messages and personal information have not been intentionally included.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The version of LimaRP uploaded on this repository was trained using a small NVidia A40 cluster in 8-bit with regular LoRA adapters and 8-bit AdamW optimizer.
#### Training Hyperparameters
The most important settings were as follows:
- --learning_rate 0.000065
- --lr_scheduler_type cosine
- --lora_r 8
- --lora_alpha 16
- --lora_dropout 0.01
- --num_train_epochs 2
- --bf16 True
- --tf32 True
- --bits 8
- --per_device_train_batch_size 1
- --gradient_accumulation_steps 1
- --optim adamw_bnb_8bit
**All linear LoRA layers** were targeted.
An effective batch size of 1 was found to yield the lowest loss curves during fine-tuning. It was also found that using `--train_on_source False` with the entire training example at the output yields similar results. These LoRAs have been trained in this way (similar to what was done with [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), or to unsupervised finetuning).
<!-- ## Evaluation -->
<!-- This section describes the evaluation protocols and provides the results. -->
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Finetuning this model on 8 NVidia A40 48GB in parallel takes about 25 minutes (7B) or 45 minutes (13B).
|
michaelriedl/MonsterForgeFusion-sd-2-base
|
michaelriedl
| 2023-08-24T01:06:20Z | 5 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-base",
"base_model:adapter:stabilityai/stable-diffusion-2-base",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-24T00:46:11Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-2-base
tags:
- stable-diffusion
- text-to-image
- diffusers
- lora
inference: true
---
|
LBR47/wav2vec2-base-finetuned-gtzan
|
LBR47
| 2023-08-24T01:05:57Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:bookbot/distil-ast-audioset",
"base_model:finetune:bookbot/distil-ast-audioset",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-14T04:15:04Z |
---
license: apache-2.0
base_model: bookbot/distil-ast-audioset
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: train
split: train
args: train
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-ast-audioset-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7907
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
YassineBenlaria/m2m100_418M_tq_fr_1
|
YassineBenlaria
| 2023-08-24T00:47:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:YassineBenlaria/m2m100_418M_tq_fr",
"base_model:finetune:YassineBenlaria/m2m100_418M_tq_fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-23T14:19:35Z |
---
base_model: heisenberg1337/m2m100_418M_tq_fr
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M_tq_fr_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M_tq_fr_1
This model is a fine-tuned version of [heisenberg1337/m2m100_418M_tq_fr](https://huggingface.co/heisenberg1337/m2m100_418M_tq_fr) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8665
- Bleu: 5.8216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8405 | 0.97 | 100 | 0.8682 | 5.4390 |
| 0.8303 | 1.94 | 200 | 0.8661 | 5.3736 |
| 0.8245 | 2.91 | 300 | 0.8616 | 5.5394 |
| 0.807 | 3.87 | 400 | 0.8632 | 5.4620 |
| 0.7954 | 4.84 | 500 | 0.8637 | 5.6718 |
| 0.7827 | 5.81 | 600 | 0.8665 | 5.8216 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
pabloyesteb/a2c-PandaReachDense-v3
|
pabloyesteb
| 2023-08-24T00:21:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-24T00:15:07Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
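A minimal loading sketch (not the author's code), assuming the checkpoint is stored as a standard SB3 `.zip` on the Hub (the filename below is a guess) and that `panda_gym` is installed to register the environment:
```python
import gymnasium as gym
import panda_gym  # noqa: F401  (registers the PandaReachDense-v3 environment)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The checkpoint filename is a guess; check the repository's file list for the actual name.
checkpoint = load_from_hub(
    repo_id="pabloyesteb/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```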
|
nlpnlp/xlm-roberta-base-finetuned-panx-de
|
nlpnlp
| 2023-08-24T00:04:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-23T17:08:22Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8600170502983802
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1391
- F1: 0.8600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2598 | 1.0 | 525 | 0.1697 | 0.8177 |
| 0.1253 | 2.0 | 1050 | 0.1343 | 0.8509 |
| 0.0812 | 3.0 | 1575 | 0.1391 | 0.8600 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230824064723
|
dkqjrm
| 2023-08-23T23:40:55Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T21:47:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824064723'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824064723
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6742
- Accuracy: 0.7076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 1.0968 | 0.5307 |
| 0.8903 | 2.0 | 624 | 0.9977 | 0.4729 |
| 0.8903 | 3.0 | 936 | 0.6500 | 0.5415 |
| 0.813 | 4.0 | 1248 | 0.8148 | 0.4729 |
| 0.7606 | 5.0 | 1560 | 0.6263 | 0.5993 |
| 0.7606 | 6.0 | 1872 | 0.7920 | 0.6245 |
| 0.7342 | 7.0 | 2184 | 1.2811 | 0.5884 |
| 0.7342 | 8.0 | 2496 | 0.5840 | 0.6462 |
| 0.6906 | 9.0 | 2808 | 0.5715 | 0.6751 |
| 0.6551 | 10.0 | 3120 | 0.5806 | 0.6859 |
| 0.6551 | 11.0 | 3432 | 0.5498 | 0.6823 |
| 0.6197 | 12.0 | 3744 | 0.6886 | 0.6968 |
| 0.5972 | 13.0 | 4056 | 1.1724 | 0.4477 |
| 0.5972 | 14.0 | 4368 | 0.6682 | 0.6101 |
| 0.7875 | 15.0 | 4680 | 0.6779 | 0.5560 |
| 0.7875 | 16.0 | 4992 | 0.9667 | 0.6354 |
| 0.6467 | 17.0 | 5304 | 0.9092 | 0.6606 |
| 0.5892 | 18.0 | 5616 | 0.6701 | 0.4621 |
| 0.5892 | 19.0 | 5928 | 0.6021 | 0.6643 |
| 0.6056 | 20.0 | 6240 | 0.8808 | 0.6787 |
| 0.5409 | 21.0 | 6552 | 0.5458 | 0.6751 |
| 0.5409 | 22.0 | 6864 | 0.5723 | 0.6859 |
| 0.5387 | 23.0 | 7176 | 0.9638 | 0.6679 |
| 0.5387 | 24.0 | 7488 | 0.7176 | 0.6968 |
| 0.511 | 25.0 | 7800 | 0.6557 | 0.6895 |
| 0.4744 | 26.0 | 8112 | 0.5338 | 0.7148 |
| 0.4744 | 27.0 | 8424 | 0.5646 | 0.7076 |
| 0.4743 | 28.0 | 8736 | 0.5423 | 0.7040 |
| 0.4598 | 29.0 | 9048 | 0.6324 | 0.7076 |
| 0.4598 | 30.0 | 9360 | 0.7069 | 0.7004 |
| 0.4485 | 31.0 | 9672 | 0.6809 | 0.6859 |
| 0.4485 | 32.0 | 9984 | 0.5675 | 0.7076 |
| 0.442 | 33.0 | 10296 | 0.8006 | 0.6895 |
| 0.4141 | 34.0 | 10608 | 0.5902 | 0.7112 |
| 0.4141 | 35.0 | 10920 | 0.6252 | 0.7148 |
| 0.4054 | 36.0 | 11232 | 0.8398 | 0.7112 |
| 0.3819 | 37.0 | 11544 | 0.7482 | 0.7004 |
| 0.3819 | 38.0 | 11856 | 0.6538 | 0.7112 |
| 0.3825 | 39.0 | 12168 | 0.7720 | 0.6968 |
| 0.3825 | 40.0 | 12480 | 0.6094 | 0.6931 |
| 0.379 | 41.0 | 12792 | 0.5863 | 0.7040 |
| 0.3701 | 42.0 | 13104 | 0.6197 | 0.7040 |
| 0.3701 | 43.0 | 13416 | 0.5795 | 0.7112 |
| 0.3576 | 44.0 | 13728 | 0.6484 | 0.7076 |
| 0.3454 | 45.0 | 14040 | 0.6623 | 0.6968 |
| 0.3454 | 46.0 | 14352 | 0.6562 | 0.7220 |
| 0.3455 | 47.0 | 14664 | 0.5921 | 0.7184 |
| 0.3455 | 48.0 | 14976 | 0.6980 | 0.7112 |
| 0.3344 | 49.0 | 15288 | 0.6210 | 0.7004 |
| 0.3285 | 50.0 | 15600 | 0.5674 | 0.7184 |
| 0.3285 | 51.0 | 15912 | 0.6134 | 0.7040 |
| 0.3295 | 52.0 | 16224 | 0.7118 | 0.7148 |
| 0.3181 | 53.0 | 16536 | 0.6978 | 0.7040 |
| 0.3181 | 54.0 | 16848 | 0.6851 | 0.7112 |
| 0.3021 | 55.0 | 17160 | 0.7702 | 0.7040 |
| 0.3021 | 56.0 | 17472 | 0.7319 | 0.7040 |
| 0.3044 | 57.0 | 17784 | 0.6459 | 0.7076 |
| 0.2938 | 58.0 | 18096 | 0.6386 | 0.7076 |
| 0.2938 | 59.0 | 18408 | 0.6550 | 0.7004 |
| 0.2991 | 60.0 | 18720 | 0.6742 | 0.7076 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230824064444
|
dkqjrm
| 2023-08-23T23:38:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T21:45:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824064444'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824064444
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0709
- Accuracy: 0.7329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.4733 | 0.5307 |
| 0.3538 | 2.0 | 624 | 0.1917 | 0.5126 |
| 0.3538 | 3.0 | 936 | 0.1696 | 0.5560 |
| 0.2775 | 4.0 | 1248 | 0.1700 | 0.5271 |
| 0.2538 | 5.0 | 1560 | 0.3497 | 0.5343 |
| 0.2538 | 6.0 | 1872 | 0.2183 | 0.5632 |
| 0.259 | 7.0 | 2184 | 0.1783 | 0.5018 |
| 0.259 | 8.0 | 2496 | 0.2321 | 0.5848 |
| 0.2587 | 9.0 | 2808 | 0.2081 | 0.6101 |
| 0.2211 | 10.0 | 3120 | 0.1194 | 0.6715 |
| 0.2211 | 11.0 | 3432 | 0.1505 | 0.6390 |
| 0.198 | 12.0 | 3744 | 0.1130 | 0.7004 |
| 0.1939 | 13.0 | 4056 | 0.1187 | 0.6679 |
| 0.1939 | 14.0 | 4368 | 0.1175 | 0.6787 |
| 0.1687 | 15.0 | 4680 | 0.1092 | 0.7040 |
| 0.1687 | 16.0 | 4992 | 0.0984 | 0.7076 |
| 0.1511 | 17.0 | 5304 | 0.1032 | 0.7076 |
| 0.1448 | 18.0 | 5616 | 0.1024 | 0.7401 |
| 0.1448 | 19.0 | 5928 | 0.0902 | 0.7112 |
| 0.1392 | 20.0 | 6240 | 0.0972 | 0.7112 |
| 0.1283 | 21.0 | 6552 | 0.0880 | 0.7184 |
| 0.1283 | 22.0 | 6864 | 0.0892 | 0.7329 |
| 0.1257 | 23.0 | 7176 | 0.1156 | 0.7401 |
| 0.1257 | 24.0 | 7488 | 0.0940 | 0.7329 |
| 0.1215 | 25.0 | 7800 | 0.0876 | 0.7401 |
| 0.1184 | 26.0 | 8112 | 0.1289 | 0.7437 |
| 0.1184 | 27.0 | 8424 | 0.0808 | 0.7256 |
| 0.1112 | 28.0 | 8736 | 0.0823 | 0.7401 |
| 0.1139 | 29.0 | 9048 | 0.0838 | 0.7256 |
| 0.1139 | 30.0 | 9360 | 0.0855 | 0.7220 |
| 0.1095 | 31.0 | 9672 | 0.0813 | 0.7256 |
| 0.1095 | 32.0 | 9984 | 0.0765 | 0.7256 |
| 0.106 | 33.0 | 10296 | 0.0847 | 0.7365 |
| 0.1034 | 34.0 | 10608 | 0.0844 | 0.7509 |
| 0.1034 | 35.0 | 10920 | 0.0811 | 0.7184 |
| 0.0991 | 36.0 | 11232 | 0.0811 | 0.7292 |
| 0.0938 | 37.0 | 11544 | 0.0847 | 0.7365 |
| 0.0938 | 38.0 | 11856 | 0.0824 | 0.7256 |
| 0.0973 | 39.0 | 12168 | 0.0760 | 0.7292 |
| 0.0973 | 40.0 | 12480 | 0.0786 | 0.7220 |
| 0.0908 | 41.0 | 12792 | 0.0732 | 0.7473 |
| 0.0894 | 42.0 | 13104 | 0.0763 | 0.7401 |
| 0.0894 | 43.0 | 13416 | 0.0811 | 0.7365 |
| 0.0896 | 44.0 | 13728 | 0.0734 | 0.7473 |
| 0.0882 | 45.0 | 14040 | 0.0747 | 0.7329 |
| 0.0882 | 46.0 | 14352 | 0.0729 | 0.7401 |
| 0.0847 | 47.0 | 14664 | 0.0723 | 0.7329 |
| 0.0847 | 48.0 | 14976 | 0.0748 | 0.7401 |
| 0.0854 | 49.0 | 15288 | 0.0755 | 0.7292 |
| 0.0813 | 50.0 | 15600 | 0.0715 | 0.7329 |
| 0.0813 | 51.0 | 15912 | 0.0719 | 0.7292 |
| 0.0845 | 52.0 | 16224 | 0.0721 | 0.7401 |
| 0.0821 | 53.0 | 16536 | 0.0711 | 0.7292 |
| 0.0821 | 54.0 | 16848 | 0.0714 | 0.7437 |
| 0.0802 | 55.0 | 17160 | 0.0711 | 0.7401 |
| 0.0802 | 56.0 | 17472 | 0.0718 | 0.7329 |
| 0.0798 | 57.0 | 17784 | 0.0708 | 0.7220 |
| 0.0796 | 58.0 | 18096 | 0.0715 | 0.7365 |
| 0.0796 | 59.0 | 18408 | 0.0712 | 0.7329 |
| 0.0806 | 60.0 | 18720 | 0.0709 | 0.7329 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yellowsproket/trailer_small_model
|
yellowsproket
| 2023-08-23T23:24:08Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T23:17:12Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of trailers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - yellowsproket/trailer_small_model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of trailers using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
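A minimal inference sketch, assuming the repository holds standard `StableDiffusionPipeline` weights and a CUDA device is available (fp16 and the prompt variation are only illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned weights; fp16 and CUDA are assumptions, adjust as needed.
pipe = StableDiffusionPipeline.from_pretrained(
    "yellowsproket/trailer_small_model", torch_dtype=torch.float16
).to("cuda")

# The instance prompt used during training was "a photo of trailers".
image = pipe("a photo of trailers on a gravel lot", num_inference_steps=30).images[0]
image.save("trailer.png")
```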
|
Aladin77/ppo-LunarLander-v2
|
Aladin77
| 2023-08-23T23:23:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T23:23:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.58 +/- 17.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
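A minimal loading sketch (not the author's code), assuming the checkpoint is stored as a standard SB3 `.zip` on the Hub (the filename below is a guess) and that `gymnasium[box2d]` is installed:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is a guess; check the repository's file list for the actual name.
checkpoint = load_from_hub(
    repo_id="Aladin77/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```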
|
NobodyExistsOnTheInternet/convenience2epochs
|
NobodyExistsOnTheInternet
| 2023-08-23T23:22:33Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-23T23:21:27Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
- PEFT 0.6.0.dev0
|
tenkomati/dqn-SpaceInvaderstest
|
tenkomati
| 2023-08-23T23:07:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T23:07:18Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 652.00 +/- 219.36
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tenkomati -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tenkomati -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tenkomati
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
dimitarrskv/rl-CartPole-v1
|
dimitarrskv
| 2023-08-23T22:59:05Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T14:38:05Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: rl-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 662.20 +/- 176.98
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ghze/dqn-SpaceInvadersNoFrameskip-v4
|
ghze
| 2023-08-23T22:53:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-23T22:52:57Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 573.50 +/- 132.53
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ghze -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ghze -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ghze
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ardt-multipart/ardt-multipart-combo_train_walker2d_v2-2308_2138-66
|
ardt-multipart
| 2023-08-23T22:28:27Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T20:40:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-combo_train_walker2d_v2-2308_2138-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-combo_train_walker2d_v2-2308_2138-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Jinpkk/ITproject_version3
|
Jinpkk
| 2023-08-23T22:25:18Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:Jinpkk/ITproject_version1",
"base_model:finetune:Jinpkk/ITproject_version1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-23T22:23:07Z |
---
license: mit
base_model: Jinpkk/ITproject_version1
tags:
- generated_from_keras_callback
model-index:
- name: Jinpkk/ITproject_version3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jinpkk/ITproject_version3
This model is a fine-tuned version of [Jinpkk/ITproject_version1](https://huggingface.co/Jinpkk/ITproject_version1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1914
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -809, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.7802 | 0 |
| 1.1914 | 1 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
daripaez/a2c-PandaReachDense-v2
|
daripaez
| 2023-08-23T22:22:15Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-24T14:21:02Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.78 +/- 0.16
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
sabre-code/distilbert-base-uncased-finetuned-emotion
|
sabre-code
| 2023-08-23T22:19:49Z | 121 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:dair-ai/emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T20:23:59Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- dair-ai/emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
language:
- en
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
zarakiquemparte/zarablend-mx-l2-7b
|
zarakiquemparte
| 2023-08-23T22:11:01Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-21T16:11:54Z |
---
license: other
tags:
- llama2
---
# Model Card: Zarablend MX L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (53%) as a base, blended with [Airoboros L2 7B GPT4 m2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-m2.0) (47%); the result of this merge was then merged with the [LimaRP Llama2 7B LoRA](https://huggingface.co/lemonilia/limarp-llama2).
The merge of the two models (Hermes and Airoboros) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py)
The merge of the LoRA with the resulting model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)
Quantized Model by @TheBloke:
- [GGML](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGML)
- [GPTQ](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GPTQ)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Airoboros and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
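As a rough loading sketch (assuming the standard `transformers` causal-LM API; the prompt and generation settings below are only illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "zarakiquemparte/zarablend-mx-l2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # assumes a GPU and accelerate
)

# Alpaca 2 style prompt, as described above.
prompt = "### Instruction:\nDescribe a quiet seaside town at dusk.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```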
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form.
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
|
MiguelCB/results
|
MiguelCB
| 2023-08-23T22:09:31Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:garage-bAInd/Stable-Platypus2-13B",
"base_model:finetune:garage-bAInd/Stable-Platypus2-13B",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-22T18:38:20Z |
---
license: cc-by-nc-sa-4.0
base_model: garage-bAInd/Stable-Platypus2-13B
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [garage-bAInd/Stable-Platypus2-13B](https://huggingface.co/garage-bAInd/Stable-Platypus2-13B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
redstonehero/anythingqingmix25d_v30
|
redstonehero
| 2023-08-23T22:07:47Z | 29 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T21:10:44Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
aliakyurek/a2c-PandaReachDense-v2
|
aliakyurek
| 2023-08-23T22:05:32Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-24T11:18:50Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.05 +/- 0.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
Geo/bert-base-multilingual-cased-fine-tuned-intent-classification
|
Geo
| 2023-08-23T22:05:04Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T20:48:15Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-fine-tuned-intent-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-fine-tuned-intent-classification
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
redstonehero/furryvixens_v20bakedvae
|
redstonehero
| 2023-08-23T21:42:47Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T20:44:31Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
felipebandeira/donutlicenses3v3
|
felipebandeira
| 2023-08-23T21:40:06Z | 114 | 4 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"en",
"dataset:felipebandeira/driverlicenses2k",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-08-16T12:35:01Z |
---
license: mit
datasets:
- felipebandeira/driverlicenses2k
language:
- en
metrics:
- accuracy
pipeline_tag: image-to-text
---
This model extracts information from EU driver's licenses and returns it as JSON. For optimal performance, we recommend that input images:
- have a size of 1192x772
- have high resolution and do not contain light reflection effects
Accuracy:
- on the validation set: 98%
- on a set of real licenses: 63.93%
Article describing the model:
https://medium.com/@ofelipebandeira/transformers-vs-ocr-who-can-read-better-192e6b044dd3
Article describing the synthetic dataset used in training:
https://python.plainenglish.io/how-to-create-synthetic-datasets-of-document-images-5f140dee5e40
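A minimal usage sketch, assuming the model loads through the generic `image-to-text` pipeline (the image path is a placeholder):
```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned vision-encoder-decoder model through the generic image-to-text pipeline.
extractor = pipeline("image-to-text", model="felipebandeira/donutlicenses3v3")

# "license.jpg" is a placeholder: ideally a 1192x772, reflection-free photo of an EU driver's license.
image = Image.open("license.jpg")
result = extractor(image)

# The pipeline returns a list of dicts with a "generated_text" field containing the extracted fields.
print(result[0]["generated_text"])
```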
|
redstonehero/fcanimemix_v30
|
redstonehero
| 2023-08-23T21:36:09Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T20:45:06Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/frozenanimation_v10
|
redstonehero
| 2023-08-23T21:36:07Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-23T20:44:15Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
him1411/EDGAR-BART-Base
|
him1411
| 2023-08-23T21:35:55Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:him1411/EDGAR10-Q",
"arxiv:2109.08079",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-03T18:32:38Z |
---
license: mit
language:
- en
tags:
- finance
- ContextNER
- language models
datasets:
- him1411/EDGAR10-Q
metrics:
- rouge
---
EDGAR-BART-Base
=============
BART base model finetuned on [EDGAR10-Q dataset](https://huggingface.co/datasets/him1411/EDGAR10-Q)
You may want to check out
* Our paper: [CONTEXT-NER: Contextual Phrase Generation at Scale](https://arxiv.org/abs/2109.08079/)
* GitHub: [Click Here](https://github.com/him1411/edgar10q-dataset)
Direct Use
=============
It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. **It should not be directly used for production or work that may directly impact people.**
How to Use
=============
You can load the model directly with Transformers instead of downloading the weights manually. The [bart-base model](https://huggingface.co/facebook/bart-base) is the backbone of our model. Here is how to use it in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("him1411/EDGAR-BART-Base")
model = AutoModelForSeq2SeqLM.from_pretrained("him1411/EDGAR-BART-Base")
```
Or just clone the model repo
```
git lfs install
git clone https://huggingface.co/him1411/EDGAR-BART-Base
```
Inference Example
=============
Here is an example of one instance from the "ContextNER" task.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("him1411/EDGAR-BART-Base")
model = AutoModelForSeq2SeqLM.from_pretrained("him1411/EDGAR-BART-Base")

# An instance from the EDGAR10-Q dataset: a value followed by its surrounding sentence.
input_text = "14.5 years . The definite lived intangible assets related to the contracts and trade names had estimated weighted average useful lives of 5.9 years and 14.5 years, respectively, at acquisition."
inputs = tokenizer(input_text, return_tensors="pt")

# Ideal output for this input is 'Definite lived intangible assets weighted average remaining useful life'
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
BibTeX Entry and Citation Info
===============
If you are using our model, please cite our paper:
```bibtex
@article{gupta2021context,
title={Context-NER: Contextual Phrase Generation at Scale},
author={Gupta, Himanshu and Verma, Shreyas and Kumar, Tarun and Mishra, Swaroop and Agrawal, Tamanna and Badugu, Amogh and Bhatt, Himanshu Sharad},
journal={arXiv preprint arXiv:2109.08079},
year={2021}
}
```
|
dkqjrm/20230824043537
|
dkqjrm
| 2023-08-23T21:35:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-23T19:35:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824043537'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824043537
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3141
- Accuracy: 0.7401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7925 | 1.0 | 623 | 0.8673 | 0.4729 |
| 0.6122 | 2.0 | 1246 | 0.4006 | 0.5415 |
| 0.5656 | 3.0 | 1869 | 1.2100 | 0.4729 |
| 0.5981 | 4.0 | 2492 | 0.4232 | 0.5632 |
| 0.5284 | 5.0 | 3115 | 0.6388 | 0.5523 |
| 0.6128 | 6.0 | 3738 | 0.4463 | 0.5307 |
| 0.4769 | 7.0 | 4361 | 0.4020 | 0.6065 |
| 0.4415 | 8.0 | 4984 | 0.3773 | 0.6029 |
| 0.4284 | 9.0 | 5607 | 0.3718 | 0.6679 |
| 0.3893 | 10.0 | 6230 | 0.3479 | 0.6606 |
| 0.3707 | 11.0 | 6853 | 0.3415 | 0.6751 |
| 0.3845 | 12.0 | 7476 | 0.3645 | 0.6787 |
| 0.3667 | 13.0 | 8099 | 0.3591 | 0.6895 |
| 0.3674 | 14.0 | 8722 | 0.3526 | 0.6931 |
| 0.3561 | 15.0 | 9345 | 0.3187 | 0.7292 |
| 0.342 | 16.0 | 9968 | 0.3318 | 0.7004 |
| 0.3305 | 17.0 | 10591 | 0.3185 | 0.7004 |
| 0.3269 | 18.0 | 11214 | 0.3733 | 0.6534 |
| 0.3341 | 19.0 | 11837 | 0.3197 | 0.7040 |
| 0.3214 | 20.0 | 12460 | 0.3166 | 0.7148 |
| 0.3109 | 21.0 | 13083 | 0.3257 | 0.7148 |
| 0.3125 | 22.0 | 13706 | 0.3299 | 0.7292 |
| 0.3097 | 23.0 | 14329 | 0.4120 | 0.6895 |
| 0.2918 | 24.0 | 14952 | 0.3158 | 0.7148 |
| 0.2792 | 25.0 | 15575 | 0.3077 | 0.7256 |
| 0.2766 | 26.0 | 16198 | 0.3078 | 0.7292 |
| 0.2811 | 27.0 | 16821 | 0.3033 | 0.7256 |
| 0.2719 | 28.0 | 17444 | 0.3017 | 0.7148 |
| 0.2661 | 29.0 | 18067 | 0.2947 | 0.7184 |
| 0.263 | 30.0 | 18690 | 0.3416 | 0.7329 |
| 0.2633 | 31.0 | 19313 | 0.3170 | 0.7256 |
| 0.2517 | 32.0 | 19936 | 0.3063 | 0.7220 |
| 0.2486 | 33.0 | 20559 | 0.3137 | 0.7256 |
| 0.252 | 34.0 | 21182 | 0.3118 | 0.7256 |
| 0.2396 | 35.0 | 21805 | 0.2980 | 0.7220 |
| 0.2471 | 36.0 | 22428 | 0.3050 | 0.7329 |
| 0.2361 | 37.0 | 23051 | 0.3366 | 0.7220 |
| 0.2358 | 38.0 | 23674 | 0.3080 | 0.7473 |
| 0.2231 | 39.0 | 24297 | 0.3191 | 0.7437 |
| 0.2298 | 40.0 | 24920 | 0.3018 | 0.7148 |
| 0.2241 | 41.0 | 25543 | 0.3090 | 0.7401 |
| 0.2243 | 42.0 | 26166 | 0.3137 | 0.7401 |
| 0.2237 | 43.0 | 26789 | 0.3277 | 0.7365 |
| 0.2147 | 44.0 | 27412 | 0.3116 | 0.7437 |
| 0.2149 | 45.0 | 28035 | 0.3289 | 0.7365 |
| 0.2087 | 46.0 | 28658 | 0.3241 | 0.7292 |
| 0.21 | 47.0 | 29281 | 0.3060 | 0.7365 |
| 0.214 | 48.0 | 29904 | 0.3311 | 0.7329 |
| 0.2108 | 49.0 | 30527 | 0.3144 | 0.7437 |
| 0.2029 | 50.0 | 31150 | 0.3094 | 0.7401 |
| 0.2028 | 51.0 | 31773 | 0.3141 | 0.7473 |
| 0.2018 | 52.0 | 32396 | 0.3188 | 0.7437 |
| 0.2079 | 53.0 | 33019 | 0.3138 | 0.7365 |
| 0.1982 | 54.0 | 33642 | 0.3109 | 0.7401 |
| 0.1926 | 55.0 | 34265 | 0.3118 | 0.7437 |
| 0.1972 | 56.0 | 34888 | 0.3270 | 0.7401 |
| 0.1986 | 57.0 | 35511 | 0.3098 | 0.7365 |
| 0.1928 | 58.0 | 36134 | 0.3131 | 0.7401 |
| 0.1974 | 59.0 | 36757 | 0.3132 | 0.7401 |
| 0.1927 | 60.0 | 37380 | 0.3141 | 0.7401 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JJinBBangMan/distilbert-base-uncased-finetuned-imdb
|
JJinBBangMan
| 2023-08-23T21:34:32Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-23T21:25:42Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4142
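Since the objective is masked-language modelling, this loss corresponds to a perplexity of roughly exp(2.4142) ≈ 11.2 (a figure derived here, not reported by the author).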
## Model description
More information needed
## Intended uses & limitations
More information needed
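No usage guidance is given, but since this is a fill-mask DistilBERT adapted to IMDB movie reviews, a minimal sketch might be (the example sentence is illustrative only, not taken from the dataset):

```python
from transformers import pipeline

# Minimal fill-mask sketch; the example sentence is illustrative, not from IMDB.
unmasker = pipeline(
    "fill-mask",
    model="JJinBBangMan/distilbert-base-uncased-finetuned-imdb",
)
for prediction in unmasker("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```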
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7015 | 1.0 | 157 | 2.4981 |
| 2.5816 | 2.0 | 314 | 2.4282 |
| 2.5366 | 3.0 | 471 | 2.4515 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|