modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
pfunk/CartPole-v1-DQPN_freq_150-seed1 | pfunk | 2023-03-18T14:11:08Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T14:11:05Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 495.60 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_150.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```bash
pip install "cleanrl[DQPN_freq_150]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_150 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed1/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_150 --policy-network-frequency 150 --seed 1
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_150',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 150,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_50-seed2 | pfunk | 2023-03-18T14:09:56Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T14:09:52Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 26.97 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_50.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```bash
pip install "cleanrl[DQPN_freq_50]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_50 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed2/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed2/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_50 --policy-network-frequency 50 --seed 2
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_50',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 50,
'policy_tau': 1.0,
'save_model': True,
'seed': 2,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_50-seed1 | pfunk | 2023-03-18T14:09:55Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T14:09:52Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 74.65 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_50.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```bash
pip install "cleanrl[DQPN_freq_50]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_50 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed1/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_50 --policy-network-frequency 50 --seed 1
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_50',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 50,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_50-seed4 | pfunk | 2023-03-18T14:09:54Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T14:09:51Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 498.88 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_50.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```bash
pip install "cleanrl[DQPN_freq_50]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_50 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_50-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_50 --policy-network-frequency 50 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_50',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 50,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_100-seed4 | pfunk | 2023-03-18T14:08:22Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T14:08:19Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 55.21 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_100.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```bash
pip install "cleanrl[DQPN_freq_100]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_100 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_100 --policy-network-frequency 100 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_100',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 100,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_100-seed3 | pfunk | 2023-03-18T14:08:13Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T14:08:10Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_100.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```bash
pip install "cleanrl[DQPN_freq_100]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_100 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed3/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_100 --policy-network-frequency 100 --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_100',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 100,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_100-seed1 | pfunk | 2023-03-18T14:07:44Z | 0 | 0 | cleanrl | ["cleanrl", "tensorboard", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T14:07:41Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_100.py).
## Get Started
To use this model, install the `cleanrl` package and run the agent with the following commands:
```bash
pip install "cleanrl[DQPN_freq_100]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_100 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed1/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_100 --policy-network-frequency 100 --seed 1
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_100',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 100,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Jackmin108/a2c-PandaReachDense-v2 | Jackmin108 | 2023-03-18T14:03:34Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T07:53:39Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.59 +/- 0.50
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub("Jackmin108/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
CVPR/DualStyleGAN | CVPR | 2023-03-18T13:53:08Z | 0 | 11 | pytorch | ["pytorch", "style-transfer", "face-stylization", "dataset:cartoon", "dataset:caricature", "dataset:anime", "dataset:pixar", "dataset:slamdunk", "dataset:arcane", "dataset:comic", "arxiv:2203.13248", "license:mit", "region:us"] | null | 2022-06-12T13:29:24Z |
---
license: mit
library_name: pytorch
tags:
- style-transfer
- face-stylization
datasets:
- cartoon
- caricature
- anime
- pixar
- slamdunk
- arcane
- comic
---
## Model Details
This system provides a web demo for the following paper:
**Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer (CVPR 2022)**
- Algorithm developed by: Shuai Yang, Liming Jiang, Ziwei Liu and Chen Change Loy
- Web demo developed by: [hysts](https://huggingface.co/hysts)
- Resources for more information:
- [Project Page](https://www.mmlab-ntu.com/project/dualstylegan/)
- [Research Paper](https://arxiv.org/abs/2203.13248)
- [GitHub Repo](https://github.com/williamyang1991/DualStyleGAN)
**Abstract**
> Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data. In this paper, we explore more challenging exemplar-based high-resolution portrait style transfer by introducing a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain. Different from StyleGAN, DualStyleGAN provides a natural way of style transfer by characterizing the content and style of a portrait with an intrinsic style path and a new extrinsic style path, respectively. The delicately designed extrinsic style path enables our model to modulate both the color and complex structural styles hierarchically to precisely pastiche the style example. Furthermore, a novel progressive fine-tuning scheme is introduced to smoothly transform the generative space of the model to the target domain, even with the above modifications on the network architecture. Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control.
## Citation Information
```bibtex
@inproceedings{yang2022Pastiche,
author = {Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change},
title = {Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2022}
}
```
|
coreml-community/coreml-ModernArtStyle-v10 | coreml-community | 2023-03-18T13:43:12Z | 0 | 3 | null | ["coreml", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-03-18T05:29:49Z |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with the CPU & GPU compute unit options.<br>
- Custom resolution versions are tagged accordingly.<br>
- The `vae-ft-mse-840000-ema-pruned.ckpt` vae is embedded into the model.<br>
- Descriptions are posted as-is from original model source. Not all features and/or results may be available in CoreML format.<br>
- This model was converted with `vae-encoder` for i2i.<br>
- This model is fp16.<br>
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).<br>
- This model does not include a safety checker (for NSFW content).<br>
# ModernArtStyle-v10:
Source(s): [Hugging Face](https://huggingface.co/theintuitiveye/modernartstyle) - [CivitAI](https://civitai.com/models/3519/modernartstyle)
You can use this model to generate modern art style images.
## Dataset
~100 modern art images.
## Usage
Use the Stability AI VAE for better results.
For the majority of prompts, the trigger phrase is not required; use *"modernartst"* to force the style.
*samples*

Help us create models of professional standard. Consider supporting us on [Patreon](https://www.patreon.com/intuitiveai) / [Ko-fi](https://ko-fi.com/intuitiveai) / [Paypal](https://www.paypal.com/paypalme/theintuitiveye).
## *Demo*
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run ModernArt Diffusion:
[](https://huggingface.co/spaces/theintuitiveye/modernartstyle)
## *License*
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL-M License specifies:
- You can't use the model to deliberately produce or share illegal or harmful outputs or content
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
takinai/Tifa_meenow | takinai | 2023-03-18T13:35:01Z | 0 | 2 | null | ["stable_diffusion", "lora", "region:us"] | null | 2023-03-17T18:11:04Z |
---
tags:
- stable_diffusion
- lora
---
The source of the model is listed below. Please check the original license at the source.
https://civitai.com/models/11367
|
Feldi/ppoSelf-LunarLender-v2 | Feldi | 2023-03-18T13:31:49Z | 0 | 0 | null | ["tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T13:31:42Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -106.81 +/- 68.67
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Feldi/ppoSelf-LunarLender-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
takinai/SamDoesArts_Sam_Yang_Style_LoRA | takinai | 2023-03-18T13:23:52Z | 0 | 4 | null | ["stable_diffusion", "lora", "region:us"] | null | 2023-03-18T13:19:05Z |
---
tags:
- stable_diffusion
- lora
---
The source of the model is listed below. Please check the original license at the source.
https://civitai.com/models/6638
|
marinone94/whisper-medium-nordic | marinone94 | 2023-03-18T13:23:18Z | 89 | 2 | transformers | ["transformers", "pytorch", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "hf-asr-leaderboard", "sv", "no", "da", "multilingual", "dataset:mozilla-foundation/common_voice_11_0", "dataset:babelbox/babelbox_voice", "dataset:NbAiLab/NST", "dataset:NbAiLab/NPSC", "dataset:google/fleurs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-12-17T07:18:20Z |
---
language:
- sv
- 'no'
- da
- multilingual
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_11_0
- babelbox/babelbox_voice
- NbAiLab/NST
- NbAiLab/NPSC
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Medium Nordic
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: test
metrics:
- type: wer
value: 11.31
name: Wer
- type: wer
value: 14.86
name: Wer
- type: wer
value: 37.02
name: Wer
---
# Whisper Medium Nordic
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (sv-SE, da, nn-NO), the [babelbox/babelbox_voice](https://huggingface.co/datasets/babelbox/babelbox_voice) (Swedish radio), the [NbAiLab/NST](https://huggingface.co/datasets/NbAiLab/NST) (Norwegian radio), the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) (Norwegian parliament) and the [google/fleurs](https://huggingface.co/datasets/google/fleurs) (sv_se, da_dk, nb_no) datasets. The goal is to leverage transfer learning across Nordic languages, which have strong similarities.
It achieves the following results on the Common Voice Swedish test set:
- Loss: 0.2129
- Wer: 11.3079
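A minimal transcription sketch with 🤗 Transformers (the audio file path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and transcribe a local audio file.
asr = pipeline("automatic-speech-recognition", model="marinone94/whisper-medium-nordic")
print(asr("sample.flac")["text"])
```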
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
Please note that a bug during training prevented us from evaluating WER correctly.
Validation loss suggests we started overfitting after 5000/6000 steps.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:--------:|:---------------:|:-----------:|
| 0.3056 | 0.1 | 1000 | 0.2670 | ~~99.9221~~ |
| 0.16 | 0.2 | 2000 | 0.2322 | ~~99.6640~~ |
| 0.1309 | 0.3 | 3000 | 0.2152 | ~~98.9759~~ |
| 0.097 | 0.4 | 4000 | 0.2112 | ~~100.0~~ |
| **0.091** | **0.5** | **5000** | **0.2094** | ~~99.7312~~ |
| 0.1098 | 0.6 | 6000 | 0.2098 | ~~98.6077~~ |
| 0.0637 | 0.7 | 7000 | 0.2148 | ~~98.4625~~ |
| 0.0718 | 0.8 | 8000 | 0.2151 | ~~99.8710~~ |
| 0.0517 | 0.9 | 9000 | 0.2175 | ~~97.2342~~ |
| 0.0465 | 1.0 | 10000 | 0.2129 | ~~96.3552~~ |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
### WandB run
https://wandb.ai/pn-aa/whisper/runs/xc70fbwv?workspace=user-emilio_marinone
### Baseline model
This model fine-tuned whisper-medium, and here we can observe improvements when evaluated on the Common Voice 11 Swedish (sv-SE), Danish (da), and Norwegian (nn-NO) test splits.
| Language | Whisper Medium (WER) | Whisper Medium Nordic (WER) |
|:--------:|:--------------------:|:---------------------------:|
| sv-SE | 14.93 | 11.31 |
| da | 20.85 | 14.86 |
| nn-NO | 50.82 | 37.02 |
|
MikolajDeja/alirezamsh-small100-pl-en-yhavinga-ccmatrix-finetune | MikolajDeja | 2023-03-18T13:20:31Z | 45 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "m2m_100", "text2text-generation", "generated_from_trainer", "dataset:ccmatrix", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-03-04T12:07:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- ccmatrix
model-index:
- name: alirezamsh-small100-pl-en-yhavinga-ccmatrix-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alirezamsh-small100-pl-en-yhavinga-ccmatrix-finetune
This model is a fine-tuned version of [alirezamsh/small100](https://huggingface.co/alirezamsh/small100) on the ccmatrix dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 65
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
aiartwork/rl_course_vizdoom_health_gathering_supreme | aiartwork | 2023-03-18T12:57:41Z | 0 | 0 | sample-factory | ["sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T12:57:13Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.75 +/- 4.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```bash
python -m sample_factory.huggingface.load_from_hub -r aiartwork/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```bash
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```bash
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
britojr/Reinforce-CartPole-v1 | britojr | 2023-03-18T12:56:24Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T12:56:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LarryAIDraw/kmsBlCherHighAltitudeHead_releaseV30 | LarryAIDraw | 2023-03-18T12:55:22Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-03-18T12:54:38Z |
---
license: creativeml-openrail-m
---
|
ZhouZX/rare-puppers | ZhouZX | 2023-03-18T12:43:57Z | 224 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-03-18T12:43:44Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8636363744735718
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [GitHub repo](https://github.com/nateraw/huggingpics).
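A minimal inference sketch with 🤗 Transformers (the image path is a placeholder):
```python
from transformers import pipeline

# Classify a local image with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="ZhouZX/rare-puppers")
print(classifier("corgi.jpg"))
```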
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
Perse90/q-FrozenLake-v1-4x4-noSlippery | Perse90 | 2023-03-18T12:26:38Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T12:26:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Perse90/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
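The `load_from_hub` helper comes from the course notebook; a minimal version (an assumption, since the card does not define it) could look like:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Download the pickled model dict from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```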
|
bankholdup/rugpt3_song_writer | bankholdup | 2023-03-18T12:11:07Z | 143 | 3 | transformers | ["transformers", "pytorch", "safetensors", "gpt2", "text-generation", "PyTorch", "Transformers", "ru", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language:
- ru
tags:
- PyTorch
- Transformers
widget:
- text: "Батя возвращается трезвый, в руке буханка"
example_title: "Example 1"
- text: "Как дела? Как дела? Это новый кадиллак"
example_title: "Example 2"
- text: "4:20 на часах и я дрочу на твоё фото"
example_title: "Example 3"
inference:
parameters:
temperature: 0.9
k: 50
p: 0.95
length: 1500
---
Model based on [ruGPT-3](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) for generating songs.
Tuned on lyrics collected from [genius](https://genius.com/).
Examples of artists used:
* [Oxxxymiron](https://genius.com/artists/Oxxxymiron)
* [Моргенштерн](https://genius.com/artists/Morgenshtern)
* [ЛСП](https://genius.com/artists/Lsp)
* [Гражданская оборона](https://genius.com/artists/Civil-defense)
* [Король и Шут](https://genius.com/artists/The-king-and-the-jester)
* etc
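A minimal generation sketch with 🤗 Transformers; the sampling settings mirror the widget parameters above, and the prompt is one of the widget examples:
```python
from transformers import pipeline

# Generate a song continuation with sampling (settings are illustrative).
generator = pipeline("text-generation", model="bankholdup/rugpt3_song_writer")
out = generator("Как дела? Как дела? Это новый кадиллак",
                max_length=200, do_sample=True, temperature=0.9, top_k=50, top_p=0.95)
print(out[0]["generated_text"])
```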
|
samwit/bloompaca-7b1-lora | samwit | 2023-03-18T12:11:06Z | 0 | 0 | null | ["region:us"] | null | 2023-03-18T12:06:52Z |
This is a LoRA fine-tuning of BLOOM-7b1 using the Alpaca instruction dataset.
It really highlights how the BLOOM models are undertrained, with ~400B tokens as opposed to the 1 trillion in the smaller LLaMA models.
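A minimal loading sketch, assuming the adapter is stored in the standard PEFT format:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the BLOOM-7b1 base model, then attach the LoRA adapter from this repo.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1")
model = PeftModel.from_pretrained(base, "samwit/bloompaca-7b1-lora")
```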
|
heziyevv/dqn-SpaceInvadersNoFrameskip-v4 | heziyevv | 2023-03-18T12:11:04Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T12:10:20Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 668.50 +/- 227.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga heziyevv -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga heziyevv -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga heziyevv
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
aiartwork/unit1-ppo-LunarLander-v2 | aiartwork | 2023-03-18T11:58:43Z | 2 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T11:58:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.74 +/- 19.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub("aiartwork/unit1-ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
megrxu/pokemon-lora | megrxu | 2023-03-18T11:56:26Z | 2 | 2 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-03-18T07:48:15Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/megrxu/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
matthv/second_t5-end2end-questions-generation | matthv | 2023-03-18T11:51:54Z | 161 | 0 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-03-18T11:36:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: second_t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# second_t5-end2end-questions-generation
This model is a fine-tuned version of [ThomasSimonini/t5-end2end-question-generation](https://huggingface.co/ThomasSimonini/t5-end2end-question-generation) on an unknown dataset.
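A minimal usage sketch; the `generate questions:` prompt format is an assumption carried over from the base end-to-end question generation model:
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="matthv/second_t5-end2end-questions-generation")
context = "generate questions: The Eiffel Tower was completed in 1889 and remains the tallest structure in Paris."
# Generated questions are typically separated by a <sep> token.
print(qg(context)[0]["generated_text"])
```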
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
cthiriet/ppo2-LunarLander-v2 | cthiriet | 2023-03-18T11:21:28Z | 0 | 0 | null | ["tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T11:14:12Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -146.89 +/- 80.73
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 100000,
 'learning_rate': 0.005,
 'num_envs': 10,
 'num_steps': 2048,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'clemdev2000/ppo2-LunarLander-v2',
 'batch_size': 20480,
 'minibatch_size': 5120}
```
|
vocabtrimmer/mt5-small-trimmed-it-5000-itquad-qg | vocabtrimmer | 2023-03-18T11:17:59Z | 106 | 0 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "question generation", "it", "dataset:lmqg/qg_itquad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-03-18T11:17:30Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: it
datasets:
- lmqg/qg_itquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento."
example_title: "Question Generation Example 1"
- text: "L' individuazione del petrolio e lo sviluppo di nuovi giacimenti richiedeva in genere <hl> da cinque a dieci anni <hl> prima di una produzione significativa."
example_title: "Question Generation Example 2"
- text: "il <hl> Giappone <hl> è stato il paese più dipendente dal petrolio arabo."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-it-5000-itquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_itquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 6.94
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 21.07
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 17.35
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 80.39
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 56.63
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-it-5000-itquad-qg`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-it-5000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-5000) for the question generation task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-it-5000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-5000)
- **Language:** it
- **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="it", model="vocabtrimmer/mt5-small-trimmed-it-5000-itquad-qg")
# model prediction
questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-it-5000-itquad-qg")
output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-5000-itquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 80.39 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_1 | 21.98 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_2 | 14.25 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_3 | 9.79 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_4 | 6.94 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| METEOR | 17.35 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| MoverScore | 56.63 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| ROUGE_L | 21.07 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_itquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-it-5000
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 16
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-5000-itquad-qg/raw/main/trainer_config.json).
## Citation
```bibtex
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
dussinus/pixelcopter-unit4-lr98e-5 | dussinus | 2023-03-18T10:42:41Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T10:42:38Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter-unit4-lr98e-5
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.20 +/- 16.41
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rebolforces/Reinforce-Pixelcopter-PLE-v0 | rebolforces | 2023-03-18T10:41:02Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-03-18T10:40:56Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.10 +/- 22.67
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg | vocabtrimmer | 2023-03-18T10:33:18Z | 106 | 0 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "question generation", "it", "dataset:lmqg/qg_itquad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-03-18T10:32:48Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: it
datasets:
- lmqg/qg_itquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento."
example_title: "Question Generation Example 1"
- text: "L' individuazione del petrolio e lo sviluppo di nuovi giacimenti richiedeva in genere <hl> da cinque a dieci anni <hl> prima di una produzione significativa."
example_title: "Question Generation Example 2"
- text: "il <hl> Giappone <hl> è stato il paese più dipendente dal petrolio arabo."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_itquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 7.51
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 21.88
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 17.78
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 81.15
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 57.1
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-it-10000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-10000) for the question generation task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-it-10000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-10000)
- **Language:** it
- **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="it", model="vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg")
# model prediction
questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg")
output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 81.15 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_1 | 22.96 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_2 | 15.06 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_3 | 10.47 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_4 | 7.51 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| METEOR | 17.78 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| MoverScore | 57.1 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| ROUGE_L | 21.88 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_itquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-it-10000
- max_length: 512
- max_length_output: 32
- epoch: 14
- batch: 16
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Melanit/dreambooth_voyager_v2
|
Melanit
| 2023-03-18T10:31:38Z | 10 | 0 |
keras
|
[
"keras",
"tf-keras",
"keras-dreambooth",
"scifi",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-03-16T18:50:33Z |
---
library_name: keras
tags:
- keras-dreambooth
- scifi
license: cc-by-nc-4.0
---
## Model description
This Stable Diffusion model has been fine-tuned on images of the Star Trek Voyager spaceship.
### Here are some examples created with the model using these settings:
Prompt: photo of voyager spaceship in space, high quality, blender, 3d, trending on artstation, 8k
Negative Prompt: bad, ugly, malformed, deformed, out of frame, blurry
Denoising Steps: 50






## Intended uses & limitations
Anyone may use this model for non-commercial use cases under the linked license, as long as Paragraph 5 of the [Open RAIL-M License](https://raw.githubusercontent.com/CompVis/stable-diffusion/main/LICENSE) is respected as well. The original model is covered by Open RAIL-M.
It was made solely as an experiment in keras_cv DreamBooth training.
Since many orthographic views were used for training, the model appears biased toward them and struggles to produce more varied compositions and poses. During inference, backgrounds tend to look noisy.
## Training and evaluation data
Images by Rob Bonchune from [Trekcore](https://blog.trekcore.com/) were used for training.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| inner_optimizer.class_name | Custom>RMSprop |
| inner_optimizer.config.name | RMSprop |
| inner_optimizer.config.weight_decay | None |
| inner_optimizer.config.clipnorm | None |
| inner_optimizer.config.global_clipnorm | None |
| inner_optimizer.config.clipvalue | None |
| inner_optimizer.config.use_ema | False |
| inner_optimizer.config.ema_momentum | 0.99 |
| inner_optimizer.config.ema_overwrite_frequency | 100 |
| inner_optimizer.config.jit_compile | True |
| inner_optimizer.config.is_legacy_optimizer | False |
| inner_optimizer.config.learning_rate | 0.0010000000474974513 |
| inner_optimizer.config.rho | 0.9 |
| inner_optimizer.config.momentum | 0.0 |
| inner_optimizer.config.epsilon | 1e-07 |
| inner_optimizer.config.centered | False |
| dynamic | True |
| initial_scale | 32768.0 |
| dynamic_growth_steps | 2000 |
| training_precision | mixed_float16 |
|
JanSt/albert-base-v2_mbti-classification
|
JanSt
| 2023-03-18T10:25:39Z | 655 | 14 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-15T22:17:36Z |

---
picture: https://en.wikipedia.org/wiki/Myers%E2%80%93Briggs_Type_Indicator
license: mit
language:
- en
metrics:
- bertscore
pipeline_tag: text-classification
library_name: transformers
---
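The card documents no usage; here is a minimal sketch for trying the classifier with the `transformers` pipeline. The example sentence is an assumption, and the meaning of the output labels (MBTI types) is not documented on the card.
```python
from transformers import pipeline

# Hypothetical usage sketch: load the MBTI classifier and score a sentence.
classifier = pipeline("text-classification", model="JanSt/albert-base-v2_mbti-classification")
print(classifier("I love spending quiet evenings alone with a good book."))
```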
|
taohoang/ppo-PyramidsTraining
|
taohoang
| 2023-03-18T10:23:06Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-18T10:23:01Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: taohoang/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
toreleon/combine-60-vsfc-xlm-r
|
toreleon
| 2023-03-18T10:19:58Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-18T10:05:06Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: combine-60-vsfc-xlm-r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# combine-60-vsfc-xlm-r
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2538
- Precision: 0.8786
- Recall: 0.9210
- F1 Weighted: 0.8993
- F1 Macro: 0.6284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:|
| 1.12 | 0.09 | 25 | 1.0597 | 0.2586 | 0.5085 | 0.3429 | 0.2247 |
| 0.9016 | 0.18 | 50 | 0.5441 | 0.8258 | 0.8642 | 0.8440 | 0.5895 |
| 0.6163 | 0.27 | 75 | 0.4097 | 0.8713 | 0.9109 | 0.8897 | 0.6215 |
| 0.4973 | 0.36 | 100 | 0.3429 | 0.8726 | 0.9135 | 0.8923 | 0.6234 |
| 0.4666 | 0.46 | 125 | 0.3091 | 0.8774 | 0.9198 | 0.8981 | 0.6277 |
| 0.4458 | 0.55 | 150 | 0.3671 | 0.8788 | 0.8888 | 0.8697 | 0.6153 |
| 0.386 | 0.64 | 175 | 0.2554 | 0.8811 | 0.9229 | 0.9012 | 0.6297 |
| 0.3975 | 0.73 | 200 | 0.2712 | 0.8834 | 0.9255 | 0.9037 | 0.6314 |
| 0.3293 | 0.82 | 225 | 0.2538 | 0.8786 | 0.9210 | 0.8993 | 0.6284 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
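Since the card lists no usage example, a minimal sketch with the `transformers` pipeline follows. The dataset is undocumented, but the model name suggests the Vietnamese VSFC student-feedback corpus, so a Vietnamese example sentence is assumed; the label names depend on the (undocumented) training setup.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="toreleon/combine-60-vsfc-xlm-r")
# Hypothetical input ("The lecturer teaches very clearly."), assuming a
# Vietnamese feedback-classification task.
print(classifier("Giảng viên dạy rất dễ hiểu."))
```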
|
dmenini/Reinforce-CartPole-v1
|
dmenini
| 2023-03-18T09:59:00Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T09:58:52Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sudheer997/lilt-en-funsd
|
sudheer997
| 2023-03-18T09:49:34Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-18T09:19:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4726
- Answer: {'precision': 0.8964677222898904, 'recall': 0.9008567931456548, 'f1': 0.8986568986568988, 'number': 817}
- Header: {'precision': 0.7446808510638298, 'recall': 0.5882352941176471, 'f1': 0.6572769953051643, 'number': 119}
- Question: {'precision': 0.8958517210944396, 'recall': 0.9424326833797586, 'f1': 0.918552036199095, 'number': 1077}
- Overall Precision: 0.8892
- Overall Recall: 0.9046
- Overall F1: 0.8968
- Overall Accuracy: 0.8387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4172 | 10.53 | 200 | 0.8947 | {'precision': 0.8194444444444444, 'recall': 0.8665850673194615, 'f1': 0.842355740630577, 'number': 817} | {'precision': 0.5284552845528455, 'recall': 0.5462184873949579, 'f1': 0.5371900826446281, 'number': 119} | {'precision': 0.845414847161572, 'recall': 0.8987929433611885, 'f1': 0.8712871287128714, 'number': 1077} | 0.8166 | 0.8649 | 0.8400 | 0.8019 |
| 0.0368 | 21.05 | 400 | 1.1681 | {'precision': 0.8507972665148064, 'recall': 0.9143206854345165, 'f1': 0.8814159292035397, 'number': 817} | {'precision': 0.45962732919254656, 'recall': 0.6218487394957983, 'f1': 0.5285714285714286, 'number': 119} | {'precision': 0.888671875, 'recall': 0.8449396471680595, 'f1': 0.866254164683484, 'number': 1077} | 0.8391 | 0.8599 | 0.8494 | 0.8104 |
| 0.0132 | 31.58 | 600 | 1.3663 | {'precision': 0.8438914027149321, 'recall': 0.9130966952264382, 'f1': 0.8771310993533216, 'number': 817} | {'precision': 0.6511627906976745, 'recall': 0.47058823529411764, 'f1': 0.5463414634146342, 'number': 119} | {'precision': 0.8687943262411347, 'recall': 0.9099350046425255, 'f1': 0.888888888888889, 'number': 1077} | 0.8494 | 0.8852 | 0.8669 | 0.8101 |
| 0.0061 | 42.11 | 800 | 1.4360 | {'precision': 0.8648018648018648, 'recall': 0.9082007343941249, 'f1': 0.8859701492537313, 'number': 817} | {'precision': 0.6867469879518072, 'recall': 0.4789915966386555, 'f1': 0.5643564356435644, 'number': 119} | {'precision': 0.8886910062333037, 'recall': 0.9266480965645311, 'f1': 0.9072727272727273, 'number': 1077} | 0.8706 | 0.8927 | 0.8815 | 0.8045 |
| 0.0043 | 52.63 | 1000 | 1.4084 | {'precision': 0.8550057537399309, 'recall': 0.9094247246022031, 'f1': 0.8813760379596678, 'number': 817} | {'precision': 0.6344086021505376, 'recall': 0.4957983193277311, 'f1': 0.5566037735849056, 'number': 119} | {'precision': 0.8842010771992819, 'recall': 0.914577530176416, 'f1': 0.8991328160657235, 'number': 1077} | 0.8608 | 0.8877 | 0.8741 | 0.8265 |
| 0.002 | 63.16 | 1200 | 1.4017 | {'precision': 0.8716136631330977, 'recall': 0.9057527539779682, 'f1': 0.8883553421368547, 'number': 817} | {'precision': 0.6593406593406593, 'recall': 0.5042016806722689, 'f1': 0.5714285714285715, 'number': 119} | {'precision': 0.8825088339222615, 'recall': 0.9275766016713092, 'f1': 0.9044816659121775, 'number': 1077} | 0.8682 | 0.8937 | 0.8808 | 0.8194 |
| 0.0018 | 73.68 | 1400 | 1.4379 | {'precision': 0.857307249712313, 'recall': 0.9118727050183598, 'f1': 0.8837485172004744, 'number': 817} | {'precision': 0.6761904761904762, 'recall': 0.5966386554621849, 'f1': 0.6339285714285715, 'number': 119} | {'precision': 0.8941068139963168, 'recall': 0.9015784586815228, 'f1': 0.8978270920018492, 'number': 1077} | 0.8675 | 0.8877 | 0.8775 | 0.8242 |
| 0.0014 | 84.21 | 1600 | 1.4741 | {'precision': 0.8871359223300971, 'recall': 0.8947368421052632, 'f1': 0.890920170627666, 'number': 817} | {'precision': 0.7590361445783133, 'recall': 0.5294117647058824, 'f1': 0.6237623762376238, 'number': 119} | {'precision': 0.8777969018932874, 'recall': 0.947075208913649, 'f1': 0.9111210361768646, 'number': 1077} | 0.8768 | 0.9011 | 0.8888 | 0.8407 |
| 0.0005 | 94.74 | 1800 | 1.5542 | {'precision': 0.871824480369515, 'recall': 0.9241126070991432, 'f1': 0.8972073677956032, 'number': 817} | {'precision': 0.7111111111111111, 'recall': 0.5378151260504201, 'f1': 0.6124401913875598, 'number': 119} | {'precision': 0.9029038112522686, 'recall': 0.9238625812441968, 'f1': 0.9132629646626893, 'number': 1077} | 0.8814 | 0.9011 | 0.8912 | 0.8219 |
| 0.0008 | 105.26 | 2000 | 1.4726 | {'precision': 0.8964677222898904, 'recall': 0.9008567931456548, 'f1': 0.8986568986568988, 'number': 817} | {'precision': 0.7446808510638298, 'recall': 0.5882352941176471, 'f1': 0.6572769953051643, 'number': 119} | {'precision': 0.8958517210944396, 'recall': 0.9424326833797586, 'f1': 0.918552036199095, 'number': 1077} | 0.8892 | 0.9046 | 0.8968 | 0.8387 |
| 0.0003 | 115.79 | 2200 | 1.5233 | {'precision': 0.8910179640718563, 'recall': 0.9106487148102815, 'f1': 0.900726392251816, 'number': 817} | {'precision': 0.71, 'recall': 0.5966386554621849, 'f1': 0.6484018264840181, 'number': 119} | {'precision': 0.9049773755656109, 'recall': 0.9285051067780873, 'f1': 0.916590284142988, 'number': 1077} | 0.8897 | 0.9016 | 0.8956 | 0.8354 |
| 0.0001 | 126.32 | 2400 | 1.5261 | {'precision': 0.8817966903073287, 'recall': 0.9130966952264382, 'f1': 0.8971737823211066, 'number': 817} | {'precision': 0.7319587628865979, 'recall': 0.5966386554621849, 'f1': 0.6574074074074073, 'number': 119} | {'precision': 0.8998194945848376, 'recall': 0.9257195914577531, 'f1': 0.9125858123569794, 'number': 1077} | 0.8844 | 0.9011 | 0.8927 | 0.8362 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
toreleon/combine-20-vsfc-xlm-r
|
toreleon
| 2023-03-18T09:46:47Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-18T09:24:41Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: combine-20-vsfc-xlm-r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# combine-20-vsfc-xlm-r
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2322
- Precision: 0.9414
- Recall: 0.9438
- F1 Weighted: 0.9409
- F1 Macro: 0.8449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:|
| 0.9866 | 0.12 | 25 | 0.8014 | 0.7299 | 0.7088 | 0.6862 | 0.4808 |
| 0.7581 | 0.24 | 50 | 0.5130 | 0.8573 | 0.8920 | 0.8717 | 0.6086 |
| 0.5523 | 0.36 | 75 | 0.4340 | 0.8637 | 0.9021 | 0.8812 | 0.6154 |
| 0.4144 | 0.47 | 100 | 0.3586 | 0.8664 | 0.9052 | 0.8841 | 0.6176 |
| 0.4314 | 0.59 | 125 | 0.2651 | 0.8946 | 0.9172 | 0.9009 | 0.6580 |
| 0.3391 | 0.71 | 150 | 0.2658 | 0.9078 | 0.9204 | 0.9116 | 0.7174 |
| 0.3441 | 0.83 | 175 | 0.2518 | 0.9198 | 0.9286 | 0.9190 | 0.7342 |
| 0.3624 | 0.95 | 200 | 0.2484 | 0.9273 | 0.9318 | 0.9173 | 0.7057 |
| 0.2703 | 1.07 | 225 | 0.2388 | 0.9348 | 0.9356 | 0.9261 | 0.7638 |
| 0.2913 | 1.18 | 250 | 0.2496 | 0.9281 | 0.9311 | 0.9209 | 0.7485 |
| 0.3268 | 1.3 | 275 | 0.2504 | 0.9317 | 0.9349 | 0.9279 | 0.7856 |
| 0.2692 | 1.42 | 300 | 0.2163 | 0.9277 | 0.9305 | 0.9239 | 0.7874 |
| 0.2913 | 1.54 | 325 | 0.2264 | 0.9270 | 0.9311 | 0.9256 | 0.7919 |
| 0.2416 | 1.66 | 350 | 0.2304 | 0.9371 | 0.9387 | 0.9333 | 0.8128 |
| 0.2158 | 1.78 | 375 | 0.2419 | 0.9359 | 0.9381 | 0.9338 | 0.8206 |
| 0.2593 | 1.9 | 400 | 0.2269 | 0.9382 | 0.9419 | 0.9370 | 0.8136 |
| 0.2331 | 2.01 | 425 | 0.2534 | 0.9364 | 0.9387 | 0.9341 | 0.8172 |
| 0.2067 | 2.13 | 450 | 0.2199 | 0.9404 | 0.9438 | 0.9407 | 0.8330 |
| 0.2102 | 2.25 | 475 | 0.2429 | 0.9288 | 0.9305 | 0.9270 | 0.8193 |
| 0.1696 | 2.37 | 500 | 0.2271 | 0.9378 | 0.9406 | 0.9382 | 0.8353 |
| 0.2598 | 2.49 | 525 | 0.2175 | 0.9360 | 0.9394 | 0.9370 | 0.8256 |
| 0.243 | 2.61 | 550 | 0.1947 | 0.9457 | 0.9482 | 0.9458 | 0.8520 |
| 0.1944 | 2.73 | 575 | 0.2052 | 0.9419 | 0.9450 | 0.9419 | 0.8354 |
| 0.1839 | 2.84 | 600 | 0.2186 | 0.9405 | 0.9425 | 0.9389 | 0.8358 |
| 0.1829 | 2.96 | 625 | 0.1944 | 0.9455 | 0.9476 | 0.9456 | 0.8583 |
| 0.1705 | 3.08 | 650 | 0.2410 | 0.9355 | 0.9387 | 0.9348 | 0.8223 |
| 0.1258 | 3.2 | 675 | 0.2225 | 0.9381 | 0.9400 | 0.9386 | 0.8475 |
| 0.11 | 3.32 | 700 | 0.2311 | 0.9410 | 0.9438 | 0.9417 | 0.8431 |
| 0.1619 | 3.44 | 725 | 0.2129 | 0.9411 | 0.9431 | 0.9419 | 0.8470 |
| 0.1698 | 3.55 | 750 | 0.2254 | 0.9388 | 0.9413 | 0.9395 | 0.8419 |
| 0.1495 | 3.67 | 775 | 0.2185 | 0.9408 | 0.9438 | 0.9403 | 0.8337 |
| 0.0989 | 3.79 | 800 | 0.2322 | 0.9414 | 0.9438 | 0.9409 | 0.8449 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
taohoang/ppo-SnowballTarget
|
taohoang
| 2023-03-18T09:32:38Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-18T09:05:39Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: taohoang/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
karimd188/finetuning-sentiment-model-3000-samples
|
karimd188
| 2023-03-18T09:31:51Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-06T18:48:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
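No usage example is provided; a minimal sketch with the `transformers` pipeline is shown below. The example review is an assumption, and since the label mapping is not documented, expect generic `LABEL_0`/`LABEL_1` outputs.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="karimd188/finetuning-sentiment-model-3000-samples")
# The label mapping is not documented on the card; outputs are LABEL_0 / LABEL_1.
print(classifier("This movie was surprisingly good."))
```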
|
jackhhhh/Taxi-v3
|
jackhhhh
| 2023-03-18T09:16:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T09:16:30Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.63
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="jackhhhh/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
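Note that `load_from_hub` is not part of `gym` or `huggingface_hub`; it is the helper defined in the course notebooks. A minimal sketch of that helper, assuming the checkpoint is a pickled dictionary as in the course:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download and unpickle a Q-table dictionary from the Hub (course-style helper)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```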
|
jackhhhh/q-FrozenLake-v1-4x4-noSlippery
|
jackhhhh
| 2023-03-18T09:09:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T09:09:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="jackhhhh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KarosY/lianjia_2l_100per200_1e-4
|
KarosY
| 2023-03-18T09:06:33Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-18T06:27:56Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/KarosY/lianjia_2l_100per200_1e-4
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base; the training dataset is not specified on this card (the template's dataset field was `None`). You can find some example images below.




|
ShreyasM/Bonus-LunarLander-v2
|
ShreyasM
| 2023-03-18T09:01:29Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T09:01:17Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -113.68 +/- 80.34
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ShreyasM/Bonus-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
Feldi/poca-SoccerTwos
|
Feldi
| 2023-03-18T08:49:18Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-18T08:49:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Feldi/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ellipsoul/q-FrozenLake-v1-4x4-noSlippery
|
Ellipsoul
| 2023-03-18T08:48:47Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T08:48:38Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Ellipsoul/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
laserchalk/sketch-of-an-animal
|
laserchalk
| 2023-03-18T08:44:01Z | 35 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-18T08:39:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### sketch-of-an-animal Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
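Since the repository is tagged `diffusers:StableDiffusionPipeline`, it should also load directly with `diffusers`; a minimal sketch follows. The instance prompt is assumed to match the model name, which is the usual fast-DreamBooth convention.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "laserchalk/sketch-of-an-animal", torch_dtype=torch.float16
).to("cuda")
# Hypothetical prompt: the trained instance token is assumed to be the model name.
image = pipe("a sketch-of-an-animal drawing of a fox").images[0]
image.save("sketch.png")
```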
Sample pictures of this concept:
|
MikolajDeja/facebook-nllb-200-distilled-600M-pl-en-yhavinga-ccmatrix-finetune
|
MikolajDeja
| 2023-03-18T07:45:18Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:ccmatrix",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-12T23:16:04Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- ccmatrix
model-index:
- name: facebook-nllb-200-distilled-600M-pl-en-yhavinga-ccmatrix-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook-nllb-200-distilled-600M-pl-en-yhavinga-ccmatrix-finetune
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the ccmatrix dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
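No usage example is given; below is a minimal sketch for Polish-to-English translation with this checkpoint, following the NLLB documentation. The FLORES language codes (`pol_Latn`, `eng_Latn`) and the example sentence are assumptions.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "MikolajDeja/facebook-nllb-200-distilled-600M-pl-en-yhavinga-ccmatrix-finetune"
tokenizer = AutoTokenizer.from_pretrained(repo, src_lang="pol_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# "Today is a beautiful day." in Polish.
inputs = tokenizer("Dziś jest piękny dzień.", return_tensors="pt")
# Force English as the target language, as in the NLLB documentation.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```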
|
apparition/dqn-SpaceInvadersNoFrameskip-v4
|
apparition
| 2023-03-18T07:25:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T07:24:49Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 545.50 +/- 208.57
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga apparition -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga apparition -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga apparition
```
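Outside the RL Zoo CLI, the checkpoint can also be loaded directly in Python; a minimal sketch using `huggingface_sb3` is below. The filename follows the usual RL Zoo naming convention, which is an assumption about this repository's contents.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="apparition/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming
)
# custom_objects keeps loading robust across SB3 versions.
model = DQN.load(checkpoint, custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0})
```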
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
zwglory/wenet_efficient_conformer_aishell_v2
|
zwglory
| 2023-03-18T06:36:30Z | 0 | 1 | null |
[
"automatic-speech-recognition",
"en",
"zh",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2023-03-18T03:39:33Z |
---
license: apache-2.0
language:
- en
- zh
metrics:
- cer
pipeline_tag: automatic-speech-recognition
---
## Efficient Conformer v2 for non-streaming ASR
**Specification**: https://github.com/wenet-e2e/wenet/pull/1636
## Aishell-1 Results
* Feature info:
* using fbank feature, cmvn, speed perturb, dither
* Training info:
* [train_u2++_efficonformer_v2.yaml](https://github.com/wenet-e2e/wenet/blob/main/examples/aishell/s0/conf/train_u2%2B%2B_efficonformer_v2.yaml)
* 8 gpu, batch size 16, acc_grad 1, 200 epochs
* lr 0.001, warmup_steps 25000
* Model info:
* Model Params: 49,354,651
* Downsample rate: 1/2 (conv2d2) * 1/4 (efficonformer block)
* encoder_dim 256, output_size 256, head 8, linear_units 2048
* num_blocks 12, cnn_module_kernel 15, group_size 3
* Decoding info:
* ctc_weight 0.5, reverse_weight 0.3, average_num 20
| decoding mode / chunk size | full | 18 | 16 |
|------------------------|------|------|------|
| attention decoder | 4.87 | 5.03 | 5.07 |
| ctc prefix beam search | 4.97 | 5.18 | 5.20 |
| attention rescoring | 4.56 | 4.75 | 4.77 |
## Start to Use
Install **WeNet** by following: https://wenet.org.cn/wenet/install.html#install-for-training
Decode:
```sh
cd wenet/examples/aishell/s0
dir=exp/wenet_efficient_conformer_aishell_v2/
ctc_weight=0.5
reverse_weight=0.3
decoding_chunk_size=-1
mode="attention_rescoring"
test_dir=$dir/test_${mode}
mkdir -p $test_dir
# Decode
nohup python wenet/bin/recognize.py --gpu 0 \
--mode $mode \
--config $dir/train.yaml \
--data_type "raw" \
--test_data data/test/data.list \
--checkpoint $dir/final.pt \
--beam_size 10 \
--batch_size 1 \
--penalty 0.0 \
--dict $dir/words.txt \
--ctc_weight $ctc_weight \
--reverse_weight $reverse_weight \
--result_file $test_dir/text \
${decoding_chunk_size:+--decoding_chunk_size $decoding_chunk_size} > logs/decode_aishell.log &
# CER
python tools/compute-cer.py --char=1 --v=1 \
data/test/text $test_dir/text > $test_dir/cer.txt
```
|
taohoang/Reinforce-Pixelcopter-PLE-v0
|
taohoang
| 2023-03-18T06:14:59Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-17T13:18:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.40 +/- 20.06
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
pfunk/CartPole-v1-DQPN_freq_200_0.99-seed3
|
pfunk
| 2023-03-18T06:00:06Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T06:00:03Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 390.63 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_200_0.99.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_200_0.99]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_200_0.99 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200_0.99-seed3/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200_0.99-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200_0.99-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_200_0.99 --gamma 0.99 --policy-network-frequency 200 --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_200_0.99',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 200,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_10000_0.99-seed3
|
pfunk
| 2023-03-18T05:58:55Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T05:58:52Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 445.04 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_10000_0.99.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_10000_0.99]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_10000_0.99 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000_0.99-seed3/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000_0.99-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000_0.99-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_10000_0.99 --gamma 0.99 --policy-network-frequency 10000 --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_10000_0.99',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 10000,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_5000_0.99-seed3
|
pfunk
| 2023-03-18T05:58:13Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T05:58:10Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 343.71 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_5000_0.99.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_5000_0.99]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_5000_0.99 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000_0.99-seed3/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000_0.99-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000_0.99-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_5000_0.99 --gamma 0.99 --policy-network-frequency 5000 --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_5000_0.99',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 5000,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_1000_0.99-seed4
|
pfunk
| 2023-03-18T05:57:19Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T05:57:16Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 385.92 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_1000_0.99.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_1000_0.99]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_1000_0.99 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000_0.99-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000_0.99-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000_0.99-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_1000_0.99 --gamma 0.99 --policy-network-frequency 1000 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_1000_0.99',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 1000,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_1000_0.99-seed2
|
pfunk
| 2023-03-18T05:57:03Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T05:57:00Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 220.31 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_1000_0.99.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_1000_0.99]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_1000_0.99 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000_0.99-seed2/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000_0.99-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000_0.99-seed2/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_1000_0.99 --gamma 0.99 --policy-network-frequency 1000 --seed 2
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_1000_0.99',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 1000,
'policy_tau': 1.0,
'save_model': True,
'seed': 2,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
siuze/Cantonese-MDCC
|
siuze
| 2023-03-18T05:47:43Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"can",
"dataset:mini_an4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2023-03-18T04:42:24Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: can
datasets:
- mini_an4
license: cc-by-4.0
---
## ESPnet2 ASR model
### `siuze/Cantonese-MDCC`
This model was trained by siuze using the mini_an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 52160d6ed337e9dec74dd59695fec1548042e0b2
pip install -e .
cd egs2/mini_an4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model siuze/Cantonese-MDCC
```
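For inference from Python rather than the recipe scripts, ESPnet2 models on the Hub can usually be loaded with `Speech2Text`; a minimal sketch is below. It assumes `espnet_model_zoo` is installed and that the input is 16 kHz mono audio, per the frontend config further down.
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("siuze/Cantonese-MDCC")

# Hypothetical input file; the model expects 16 kHz mono audio (fs: 16k below).
speech, rate = sf.read("sample_16k.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]  # best hypothesis
print(text)
```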
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Mar 17 23:08:24 CST 2023`
- python version: `3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55) [GCC 11.3.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.10.0`
- Git hash: `52160d6ed337e9dec74dd59695fec1548042e0b2`
- Commit date: `Thu Mar 16 21:37:39 2023 +0000`
## exp/asr_train_asr_transformer_raw_can_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/test|9077|108147|0.0|0.0|100.0|0.0|100.0|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/test|9077|666586|0.0|0.0|100.0|0.0|100.0|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_can_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_can_char/train/speech_shape
- exp/asr_stats_raw_can_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_can_char/valid/speech_shape
- exp/asr_stats_raw_can_char/valid/text_shape.char
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- <space>
- '3'
- '2'
- '5'
- g
- o
- a
- n
- i
- '4'
- u
- e
- k
- '1'
- j
- y
- z
- s
- h
- d
- m
- l
- c
- b
- f
- t
- w
- p
- r
- x
- v
- q
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_can_char/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: false
```
</details>
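For inference with a config like this, ESPnet's `Speech2Text` loader is the usual entry point. A minimal sketch — the repo id and audio path below are placeholders, not values from this card:
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Placeholder repo id — substitute this model's actual Hub id.
speech2text = Speech2Text.from_pretrained(
    "your-username/asr_train_asr_transformer_raw_can_char",
    ctc_weight=0.3,  # matches model_conf.ctc_weight in the config above
)

speech, rate = sf.read("utt1.wav")  # 16 kHz audio, per the config's frontend_conf fs: 16k
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```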
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
bryjaco/my_tc_model
|
bryjaco
| 2023-03-18T05:44:06Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-17T23:26:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_tc_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93252
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_tc_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Accuracy: 0.9325
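For a quick smoke test, the checkpoint can be loaded with the standard `pipeline` API (a sketch; the exact label names depend on the saved label mapping):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="bryjaco/my_tc_model")
print(classifier("This movie was an absolute delight from start to finish."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}] — label names depend on the model config
```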
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2323 | 1.0 | 1563 | 0.1874 | 0.9279 |
| 0.1472 | 2.0 | 3126 | 0.2298 | 0.9325 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
kailashsp/q-FrozenLake-v1-4x4-noSlippery
|
kailashsp
| 2023-03-18T05:41:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T05:41:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook;
# it downloads and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="kailashsp/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
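Once loaded, a greedy rollout looks roughly like the sketch below. The `"qtable"` key is an assumption about the course's pickle format; only `"env_id"` appears in the snippet above.
```python
import numpy as np

state = env.reset()  # classic gym API; gymnasium returns (obs, info) instead
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key assumed
    state, reward, done, info = env.step(action)     # gymnasium adds a `truncated` flag
```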
|
aiartwork/a2c-PandaReachDense-v2
|
aiartwork
| 2023-03-18T05:33:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-17T09:42:33Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.61 +/- 0.16
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the zip filename follows the usual `huggingface_sb3` naming convention and is an assumption, not taken from this card):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from the standard huggingface_sb3 convention.
checkpoint = load_from_hub("aiartwork/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
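Once loaded, evaluation can use SB3's standard helper (`panda-gym` must be installed to register the environment):
```python
import gym
import panda_gym  # noqa: F401 — registers the Panda environments
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```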
|
Raiden-1001/Reinforce-CartPole-v1
|
Raiden-1001
| 2023-03-18T05:26:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T05:26:34Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
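For orientation, the heart of REINFORCE as taught in that unit is the log-probability-weighted discounted return; a minimal PyTorch sketch of the loss (not this repository's exact code):
```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: log pi(a_t|s_t) tensors for one episode; rewards: list of floats."""
    returns, g = [], 0.0
    for r in reversed(rewards):            # discounted return-to-go
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize for stability
    return -(torch.stack(log_probs) * returns).sum()
```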
|
jjlira/ppo-SnowballTarget
|
jjlira
| 2023-03-18T05:14:20Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:54:16Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: jjlira/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
andylolu24/PyramidsRND
|
andylolu24
| 2023-03-18T05:03:58Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-18T05:03:41Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: andylolu24/PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pfunk/CartPole-v1-DQPN_freq_200-seed4
|
pfunk
| 2023-03-18T04:54:30Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:54:27Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 495.20 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_200.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_200]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_200 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_200 --policy-network-frequency 200 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_200',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 200,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
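The `start_e`, `end_e`, and `exploration_fraction` entries above define a linear epsilon-greedy schedule, as in CleanRL's DQN-style scripts:
```python
def linear_schedule(start_e: float, end_e: float, duration: int, t: int) -> float:
    slope = (end_e - start_e) / duration
    return max(slope * t + start_e, end_e)

# exploration_fraction=0.2 of total_timesteps=500000 -> epsilon anneals over 100k steps,
# e.g. epsilon = 0.55 at t=50000, then stays at end_e=0.1.
epsilon = linear_schedule(1.0, 0.1, int(0.2 * 500_000), 50_000)
```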
|
pfunk/CartPole-v1-DQPN_freq_200-seed1
|
pfunk
| 2023-03-18T04:53:56Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:53:53Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_200.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_200]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_200 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200-seed1/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_200-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_200 --policy-network-frequency 200 --seed 1
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_200',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 200,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_10000-seed2
|
pfunk
| 2023-03-18T04:53:22Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:53:19Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_10000.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_10000]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_10000 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000-seed2/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000-seed2/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_10000 --policy-network-frequency 10000 --seed 2
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_10000',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 10000,
'policy_tau': 1.0,
'save_model': True,
'seed': 2,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_5000-seed3
|
pfunk
| 2023-03-18T04:53:12Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:53:09Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_5000.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_5000]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_5000 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed3/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_5000 --policy-network-frequency 5000 --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_5000',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 5000,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_1000-seed2
|
pfunk
| 2023-03-18T04:53:03Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:52:59Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_1000.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_1000]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_1000 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed2/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed2/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_1000 --policy-network-frequency 1000 --seed 2
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_1000',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 1000,
'policy_tau': 1.0,
'save_model': True,
'seed': 2,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_5000-seed2
|
pfunk
| 2023-03-18T04:53:02Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:52:57Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 54.07 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_5000.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_5000]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_5000 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed2/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed2/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_5000 --policy-network-frequency 5000 --seed 2
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_5000',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 5000,
'policy_tau': 1.0,
'save_model': True,
'seed': 2,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_1000-seed3
|
pfunk
| 2023-03-18T04:52:56Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:52:53Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_1000.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_1000]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_1000 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed3/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_1000 --policy-network-frequency 1000 --seed 3
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_1000',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 1000,
'policy_tau': 1.0,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_10000-seed1
|
pfunk
| 2023-03-18T04:52:37Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:52:33Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 237.27 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_10000.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_10000]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_10000 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000-seed1/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_10000-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_10000 --policy-network-frequency 10000 --seed 1
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_10000',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 10000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_5000-seed4
|
pfunk
| 2023-03-18T04:52:36Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:52:33Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_5000.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_5000]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_5000 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed4/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_5000-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_5000 --policy-network-frequency 5000 --seed 4
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_5000',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 5000,
'policy_tau': 1.0,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQPN_freq_1000-seed1
|
pfunk
| 2023-03-18T04:52:32Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T04:52:29Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 498.63 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_1000.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_freq_1000]"
python -m cleanrl_utils.enjoy --exp-name DQPN_freq_1000 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed1/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_1000-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_1000 --policy-network-frequency 1000 --seed 1
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQPN_freq_1000',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 1000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/CartPole-v1-DQN_baseline-seed1
|
pfunk
| 2023-03-18T04:28:20Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-12T03:34:47Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **CartPole-v1**
This is a trained model of a DQN agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_baseline.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_baseline]"
python -m cleanrl_utils.enjoy --exp-name DQN_baseline --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --exp-name DQN_baseline --seed 1 --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk
```
# Hyperparameters
```python
{'alg_type': 'dqn.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'DQN_baseline',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'save_model': True,
'seed': 1,
'start_e': 1.0,
'target_network_frequency': 20,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
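With `target_tau: 1.0` and `target_network_frequency: 20`, the target network is simply hard-copied from the online network every 20 steps; a sketch of the general polyak update that this is a special case of:
```python
import torch

def update_target(online: torch.nn.Module, target: torch.nn.Module, tau: float = 1.0):
    for p, tp in zip(online.parameters(), target.parameters()):
        tp.data.copy_(tau * p.data + (1.0 - tau) * tp.data)  # tau=1.0 -> hard copy
```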
|
Unggi/ko_hate_speech_KcELECTRA
|
Unggi
| 2023-03-18T03:43:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-17T01:43:01Z |
---
license: cc-by-nc-sa-4.0
---
|
ozfan/BT5153-kaggle-sentiment-model-3000-samples
|
ozfan
| 2023-03-18T03:34:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-16T11:36:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: BT5153-kaggle-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BT5153-kaggle-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6160
- Accuracy: 0.9270
- F1: 0.9288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
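Mapped onto `transformers`, the list above corresponds roughly to the following `TrainingArguments` (a sketch, not the exact training script; the Adam betas/epsilon shown above are the defaults):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="BT5153-kaggle-sentiment-model-3000-samples",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```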
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2851 | 1.0 | 625 | 0.2058 | 0.9216 | 0.9231 |
| 0.1735 | 2.0 | 1250 | 0.2257 | 0.9244 | 0.9258 |
| 0.121 | 3.0 | 1875 | 0.2907 | 0.9232 | 0.9251 |
| 0.0525 | 4.0 | 2500 | 0.3607 | 0.9194 | 0.9219 |
| 0.0381 | 5.0 | 3125 | 0.4109 | 0.9216 | 0.9233 |
| 0.0257 | 6.0 | 3750 | 0.4142 | 0.9232 | 0.9244 |
| 0.0192 | 7.0 | 4375 | 0.4321 | 0.9230 | 0.9233 |
| 0.0126 | 8.0 | 5000 | 0.4745 | 0.9250 | 0.9278 |
| 0.01 | 9.0 | 5625 | 0.5053 | 0.9240 | 0.9246 |
| 0.0091 | 10.0 | 6250 | 0.5256 | 0.9240 | 0.9267 |
| 0.0062 | 11.0 | 6875 | 0.5798 | 0.9246 | 0.9255 |
| 0.0033 | 12.0 | 7500 | 0.5935 | 0.9242 | 0.9262 |
| 0.0019 | 13.0 | 8125 | 0.5891 | 0.9286 | 0.9303 |
| 0.0018 | 14.0 | 8750 | 0.6176 | 0.9266 | 0.9287 |
| 0.0001 | 15.0 | 9375 | 0.6160 | 0.9270 | 0.9288 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
moonx3/3moonmix
|
moonx3
| 2023-03-18T03:23:01Z | 0 | 27 | null |
[
"region:us"
] | null | 2023-02-17T15:44:34Z |
Uploading the 3moon mix files here.
|
Agtian/llama-30b-int4
|
Agtian
| 2023-03-18T02:54:21Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-18T01:56:34Z |
---
license: other
---
Converted with https://github.com/qwopqwop200/GPTQ-for-LLaMa
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>4096</th><th>32</th><th>32</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T</th>
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T</th>
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93</th>
</tr>
<tr>
<th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94</th>
</tr>
<tr>
<th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92</th>
</tr>
<tr>
<th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
rebolforces/ppo-SnowballTarget
|
rebolforces
| 2023-03-18T02:49:50Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-18T02:49:44Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: rebolforces/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HarBat/distilled_bert_finetuning
|
HarBat
| 2023-03-18T02:42:25Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:sst2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-12T18:00:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: distilled_bert_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled_bert_finetuning
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sst2 dataset.
- Label 0: Negative
- Label 1: Positive
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.11.0+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
coreml-community/coreml-HassanBlend
|
coreml-community
| 2023-03-18T02:06:21Z | 0 | 7 | null |
[
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-12-29T21:38:05Z |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-files-to-Core-ML).<br>
- Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with CPU & GPU option.<br>
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# HassanBlend1.5:
Source(s): Hugging Face: [1.4](https://huggingface.co/hassanblend/hassanblend1.4) - [1.5.1.2](https://huggingface.co/hassanblend/HassanBlend1.5.1.2) - [CivitAI](https://civitai.com/models/1173/hassanblend-1512-and-previous-versions)
I am Hassan, and I created HassanBlend; the latest version is currently 1.5.1.2, and I continue to iterate on and improve this model over time. Feel free to check out our Discord or rentry page for more examples with prompts and generated outputs.
This blend is fine-tuned on top of SD1.5 with thousands of images included in its training dataset. On top of that, some minor merges are added in to soften it up and increase creativity.
I also have custom-created content, such as enhancement hypernetworks and embeddings, available only to Patreon or Ko-fi subscribers on my pages below.
<b> Links </b><br>
<b>Patreon</b>
<a href="https://www.patreon.com/sd_hassan" target="_blank"><img src="https://i.imgur.com/sR32SqJ.jpg"></img></a>
<b>KoFi</b>
<a href="https://ko-fi.com/sdhassan" target="_blank"><img src="https://i.imgur.com/0P7CTN4.png"></img></a>
<b>Discord</b>
<a href="https://discord.gg/sdmodelers" target="_blank"><img src="https://i.imgur.com/HC1iHwg.png"></img></a>
### Quicklinks:
* [Latest Setup](https://rentry.org/sdhassan#current-setup)
* [HassanBlend Model Finetune Updates](https://rentry.org/sdhassan#hassanblend-finetuning-updates)
* [Latest Patreon Posts](https://rentry.org/sdhassan#patreon-posts)
* [Models](https://rentry.org/sdhassan#models)
* [HassanBlend1.5](https://rentry.org/sdhassan#hassanblend15-downloads)
* [HassanBlend1.4](https://rentry.org/sdhassan#hassanblend14-downloads)
* [Prompts](https://rentry.org/sdhassan#prompts)
* [Photorealistic Tips](https://rentry.org/sdhassan#tips-for-photorealistic-images)
* [Embeddings](https://rentry.org/sdhassan#embeddings)
* [Hypernetworks](https://rentry.org/sdhassan#hypernetworks)
* [Wildcards](https://rentry.org/sdhassan#wildcards-i-made)
* [MyTools](https://rentry.org/sdhassan#my-tools)
* [Settings I use](https://rentry.org/sdhassan#settings)
Model details and examples with sample prompts: https://rentry.org/sdhassan
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
SummerSigh/T5-Base-EvilPrompterRM
|
SummerSigh
| 2023-03-18T01:37:01Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-03-12T06:51:11Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="SummerSigh/T5-Base-EvilPrompterRM")  # repo id fixed; the auto-generated card pointed at a temporary local path
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("SummerSigh/T5-Base-EvilPrompterRM")
model = AutoModelForCausalLMWithValueHead.from_pretrained("SummerSigh/T5-Base-EvilPrompterRM")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
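Since this checkpoint is T5-based (encoder-decoder), the seq2seq variants are likely a better fit than the causal-LM classes in the auto-generated snippet above (an assumption; adjust to taste):
```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("SummerSigh/T5-Base-EvilPrompterRM")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("SummerSigh/T5-Base-EvilPrompterRM")
```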
|
Jasmin0600/Taxi
|
Jasmin0600
| 2023-03-18T01:17:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T01:17:08Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the Deep RL course notebook helper for fetching the pickled Q-table.
model = load_from_hub(repo_id="Jasmin0600/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Jasmin0600/FrozenLake
|
Jasmin0600
| 2023-03-18T01:00:54Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-18T00:59:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="Jasmin0600/FrozenLake", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mrm8488/t5-small-finetuned-wikisql-sql-nl-nl-sql
|
mrm8488
| 2023-03-18T00:15:12Z | 114 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-07T14:55:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
widget:
- text: "translate to SQL: How many models with BERT architecture are in the HuggingFace Hub?"
- text: "translate to English: SELECT COUNT Model FROM table WHERE Architecture = RoBERTa AND creator = Manuel Romero"
metrics:
- bleu
model-index:
- name: t5-small-finetuned-wikisql-sql-nl-nl-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql-sql-nl-nl-sql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the WikiSQL dataset (SQL ↔ natural language).
It achieves the following results on the evaluation set:
- Loss: 0.1932
- Bleu: 41.8787
- Gen Len: 16.6251
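As a quick inference sketch (task prefixes as in the widget examples above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mrm8488/t5-small-finetuned-wikisql-sql-nl-nl-sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "translate to SQL: How many models with BERT architecture are in the HuggingFace Hub?"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```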
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.2655 | 1.0 | 8097 | 0.2252 | 39.7999 | 16.6893 |
| 0.2401 | 2.0 | 16194 | 0.2066 | 40.9456 | 16.6712 |
| 0.2236 | 3.0 | 24291 | 0.1985 | 41.3509 | 16.5884 |
| 0.2158 | 4.0 | 32388 | 0.1944 | 41.6988 | 16.6165 |
| 0.2122 | 5.0 | 40485 | 0.1932 | 41.8787 | 16.6251 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lipee/ppo-SnowballTarget
|
lipee
| 2023-03-17T23:11:55Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-17T23:11:49Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: lipee/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
PavanDeepak/ppo-LunarLander-v2
|
PavanDeepak
| 2023-03-17T23:04:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-17T23:03:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 206.91 +/- 48.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed to follow
# the standard "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="PavanDeepak/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jprivx/urpm13
|
jprivx
| 2023-03-17T22:58:23Z | 5 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-16T20:25:01Z |
---
title: uberRealisticPornMerge_urpmv13
emoji: 📚
colorFrom: green
colorTo: indigo
sdk: gradio
sdk_version: 3.11.0
app_file: app.py
pinned: false
license: creativeml-openrail-m
tags:
- text-to-image
inference: true
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
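As a minimal sketch, assuming the repo hosts standard Stable Diffusion weights (per the `diffusers` and `text-to-image` tags):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the repo is loadable as a StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("jprivx/urpm13", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of a quiet mountain lake at dawn").images[0]
image.save("out.png")
```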
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adzcai/dqn-SpaceInvadersNoFrameskip-v4
|
adzcai
| 2023-03-17T22:57:20Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-17T22:53:55Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 736.00 +/- 208.04
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga adzcai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga adzcai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga adzcai
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
borgsid/borgsidlukee
|
borgsid
| 2023-03-17T22:56:58Z | 34 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-17T22:54:53Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### borgsidlukee Dreambooth model trained by borgsid with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
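For programmatic use, a minimal diffusers sketch (the concept token is assumed from the repo name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("borgsid/borgsidlukee", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "borgsidlukee" is assumed to be the Dreambooth concept token
image = pipe("a photo of borgsidlukee").images[0]
image.save("sample.png")
```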
Sample pictures of this concept:
|
engianx/distilbert-base-uncased-finetuned-imdb
|
engianx
| 2023-03-17T22:37:16Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-17T22:34:41Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: engianx/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# engianx/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the IMDb dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6445
- Validation Loss: 3.3436
- Epoch: 0
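A quick fill-mask sketch (TensorFlow weights, per the `tf` tag):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="engianx/distilbert-base-uncased-finetuned-imdb", framework="tf")
print(fill_mask("This movie was an absolute [MASK]."))
```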
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -936, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.6445 | 3.3436 | 0 |
### Framework versions
- Transformers 4.27.1
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
galsenai/wav2vec2-large-waxal-keyword-spotting
|
galsenai
| 2023-03-17T22:36:57Z | 171 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-03-17T22:32:16Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
- precision
- f1
model-index:
- name: wav2vec2-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the galsenai/waxal_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3413
- Accuracy: 0.9443
- Precision: 0.9780
- F1: 0.9604
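A minimal keyword-spotting sketch (wav2vec2 expects 16 kHz mono audio; the file path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="galsenai/wav2vec2-large-waxal-keyword-spotting")
print(classifier("keyword_sample.wav"))  # hypothetical path to a 16 kHz mono recording
```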
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 32.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|
| 4.6314 | 1.01 | 500 | 4.9165 | 0.0205 | 0.0028 | 0.0049 |
| 3.7739 | 2.02 | 1000 | 4.4491 | 0.0356 | 0.0750 | 0.0252 |
| 2.5035 | 3.04 | 1500 | 4.1429 | 0.1129 | 0.2672 | 0.1114 |
| 1.5633 | 4.05 | 2000 | 3.1973 | 0.3676 | 0.6598 | 0.3830 |
| 1.0538 | 5.06 | 2500 | 2.5479 | 0.5889 | 0.8417 | 0.6557 |
| 0.7422 | 6.07 | 3000 | 1.4494 | 0.7825 | 0.8921 | 0.8194 |
| 0.5762 | 7.08 | 3500 | 1.3168 | 0.7726 | 0.9277 | 0.8267 |
| 0.46 | 8.1 | 4000 | 0.8783 | 0.8564 | 0.9532 | 0.8982 |
| 0.4007 | 9.11 | 4500 | 0.7524 | 0.8738 | 0.9637 | 0.9137 |
| 0.3374 | 10.12 | 5000 | 0.6386 | 0.8852 | 0.9678 | 0.9221 |
| 0.3108 | 11.13 | 5500 | 0.5049 | 0.9106 | 0.9681 | 0.9373 |
| 0.2735 | 12.15 | 6000 | 0.6097 | 0.8905 | 0.9624 | 0.9226 |
| 0.2716 | 13.16 | 6500 | 0.4543 | 0.9000 | 0.9569 | 0.9206 |
| 0.2484 | 14.17 | 7000 | 0.3965 | 0.9272 | 0.9742 | 0.9489 |
| 0.228 | 15.18 | 7500 | 0.6807 | 0.8856 | 0.9777 | 0.9257 |
| 0.2307 | 16.19 | 8000 | 0.5219 | 0.9174 | 0.9802 | 0.9464 |
| 0.2169 | 17.21 | 8500 | 0.4630 | 0.9121 | 0.9677 | 0.9338 |
| 0.1997 | 18.22 | 9000 | 0.5152 | 0.9128 | 0.9740 | 0.9398 |
| 0.1921 | 19.23 | 9500 | 0.5105 | 0.9144 | 0.9867 | 0.9476 |
| 0.1825 | 20.24 | 10000 | 0.6302 | 0.9053 | 0.9832 | 0.9407 |
| 0.1786 | 21.25 | 10500 | 0.4602 | 0.9272 | 0.9813 | 0.9524 |
| 0.1671 | 22.27 | 11000 | 0.5443 | 0.9147 | 0.9794 | 0.9444 |
| 0.1623 | 23.28 | 11500 | 0.3413 | 0.9443 | 0.9780 | 0.9604 |
| 0.1595 | 24.29 | 12000 | 0.4478 | 0.9288 | 0.9813 | 0.9531 |
| 0.151 | 25.3 | 12500 | 0.4178 | 0.9360 | 0.9818 | 0.9571 |
| 0.1472 | 26.32 | 13000 | 0.4154 | 0.9356 | 0.9833 | 0.9578 |
| 0.1473 | 27.33 | 13500 | 0.4549 | 0.9318 | 0.9837 | 0.9561 |
| 0.131 | 28.34 | 14000 | 0.3574 | 0.9424 | 0.9845 | 0.9621 |
| 0.134 | 29.35 | 14500 | 0.4475 | 0.9333 | 0.9840 | 0.9568 |
| 0.1282 | 30.36 | 15000 | 0.4012 | 0.9382 | 0.9837 | 0.9591 |
| 0.1307 | 31.38 | 15500 | 0.3552 | 0.9428 | 0.9847 | 0.9624 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
Jartemio/The_Owl_Characters_V2
|
Jartemio
| 2023-03-17T22:34:31Z | 62 | 7 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"the-owl-house",
"en",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-08T07:54:51Z |
---
license: openrail
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- the-owl-house
library_name: diffusers
---
<style>
table {
border-collapse: collapse;
width: 100%;
opacity: 0.8;
}
td {
border: none;
padding: 0px;
}
img {
max-width: 100%;
}
tr {
border-top: none;
border-bottom: none;
}
</style>
# THE OWL CHARACTERS
Model trained on the characters from the series *The Owl House* and their drawing style, using [EveryDream Trainer 2.0](https://github.com/victorchall/EveryDream2trainer). I created the dataset by extracting images from the episodes uploaded on [TheOwlClub.net](https://www.theowlclub.net/).
#### Try the model on Google Colab:
[](https://colab.research.google.com/github/jartemio/The_Owl_Characters_V2/blob/main/The_Owl_Characters_V2_English.ipynb)
[](https://colab.research.google.com/github/jartemio/The_Owl_Characters_V2/blob/main/The_Owl_Characters_V2_Espanol.ipynb)
[](https://colab.research.google.com/github/jartemio/The_Owl_Characters_V2/blob/main/The_Owl_Characters_V2_Korean.ipynb)
[](https://colab.research.google.com/github/jartemio/The_Owl_Characters_V2/blob/main/The_Owl_Characters_V2_中文.ipynb)
#### The style training was done using the key **aniscreen**:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp3efgc8mjfgggbvlh.png"
alt="tmp3efgc8mjfgggbvlh.png" title="tmp3efgc8mjfgggbvlh.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpgb6xpryl.png"
alt="tmpgb6xpryl.png" title="tmpgb6xpryl.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpk_zes93b.png"
alt="tmpk_zes93b.png" title="tmpk_zes93b.png" />
</td>
</tr>
</table>
#### The trained characters along with their keys are:
- **LuzNoceda**
- With aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmptgbbp8ed.png"
alt="tmptgbbp8ed.png" title="tmptgbbp8ed.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpwtpohflb.png"
alt="tmpwtpohflb.png" title="tmpwtpohflb.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpcsfzask1umbh376s.png"
alt="tmpcsfzask1umbh376s.png" title="tmpk_zes93b.png" />
</td>
</tr>
</table>
- Without aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpl5egvhsig9dmb_3y.png"
alt="tmp7komqa85aigpx857.png" title="tmpl5egvhsig9dmb_3y.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp7komqa85aigpx857.png"
alt="tmp7komqa85aigpx857.png" title="tmp7komqa85aigpx857.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpca1fmrfa.png"
alt="tmpca1fmrfa.png" title="tmpca1fmrfa.png" />
</td>
</tr>
</table>
- **AmityBlight**
- With aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpr9_9vfxfxg4cl73p.png"
alt="tmpr9_9vfxfxg4cl73p.png" title="tmpr9_9vfxfxg4cl73p.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp_wj6zbn3mba2ts4x.png"
alt="tmp_wj6zbn3mba2ts4x.png" title="tmp_wj6zbn3mba2ts4x.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp3pb40xo9.png"
alt="tmp3pb40xo9.png" title="tmp3pb40xo9.png" />
</td>
</tr>
</table>
- Without aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp4rbp88kveno3qu_1.png"
alt="tmp4rbp88kveno3qu_1.png" title="tmp4rbp88kveno3qu_1.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpnoa_8azgzmrecu05.png"
alt="tmpnoa_8azgzmrecu05.png" title="tmpnoa_8azgzmrecu05.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmplt00ac1a.png"
alt="tmplt00ac1a.png" title="tmplt00ac1a.png" />
</td>
</tr>
</table>
- **HunterGoldenGuard**
- With aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpb6c72jw0.png"
alt="tmpb6c72jw0.png" title="tmpb6c72jw0.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpu9pqtyihwcw67pai.png"
alt="tmpu9pqtyihwcw67pai.png" title="tmpu9pqtyihwcw67pai.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpc_fs0t43hiu791u4.png"
alt="tmpc_fs0t43hiu791u4.png" title="tmpc_fs0t43hiu791u4.png" />
</td>
</tr>
</table>
- Without aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp6bfn0rnp.png"
alt="tmp6bfn0rnp.png" title="tmp6bfn0rnp.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmph0bgjpmz.png"
alt="tmph0bgjpmz.png" title="tmph0bgjpmz.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpl49_ctpw.png"
alt="tmpl49_ctpw.png" title="tmpl49_ctpw.png" />
</td>
</tr>
</table>
- **WillowPark**
- With aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp_bkcx8fv.png"
alt="tmp_bkcx8fv.png" title="tmp_bkcx8fv.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpds0zimpd.png"
alt="tmpds0zimpd.png" title="tmpds0zimpd.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpkb959lzx.png"
alt="tmpkb959lzx.png" title="tmpkb959lzx.png" />
</td>
</tr>
</table>
- Without aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp488gydue.png"
alt="tmp488gydue.png" title="tmp488gydue.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpt7dwc1lo.png"
alt="tmpt7dwc1lo.png" title="tmpt7dwc1lo.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpn_yfd46q.png"
alt="tmpn_yfd46q.png" title="tmpn_yfd46q.png" />
</td>
</tr>
</table>
- **GusPotter** *(a special case: due to an error in my data, the key is **GusPorter** rather than GusPotter)*
- With aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpduyag_7b.png"
alt="tmpduyag_7b.png" title="tmpduyag_7b.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpolrx5lvp.png"
alt="tmpolrx5lvp.png" title="tmpolrx5lvp.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpq7a_7toa.png"
alt="tmpq7a_7toa.png" title="tmpq7a_7toa.png" />
</td>
</tr>
</table>
- Without aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmps7mhg6yf.png"
alt="tmps7mhg6yf.png" title="tmps7mhg6yf.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpw7qge_o7.png"
alt="tmpw7qge_o7.png" title="tmpw7qge_o7.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpqcrlumu0.png"
alt="tmpqcrlumu0.png" title="tmpqcrlumu0.png" />
</td>
</tr>
</table>
- **EdalynClawthorne**
- With aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp26_fwtvn.png"
alt="tmp26_fwtvn.png" title="tmp26_fwtvn.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpogurcgd0.png"
alt="tmpogurcgd0.png" title="tmpogurcgd0.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpxixqqz14.png"
alt="tmpxixqqz14.png" title="tmpxixqqz14.png" />
</td>
</tr>
</table>
- Without aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp4izfh0_n.png"
alt="tmp4izfh0_n.png" title="tmp4izfh0_n.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpams1jey5.png"
alt="tmpams1jey5.png" title="tmpams1jey5.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp1vto1bjj.png"
alt="tmp1vto1bjj.png" title="tmp1vto1bjj.png" />
</td>
</tr>
</table>
- **LilithClawthorne**
- With aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpndb6ckl_.png"
alt="tmpndb6ckl_.png" title="tmpndb6ckl_.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpqqx1hwdr.png"
alt="tmpqqx1hwdr.png" title="tmpqqx1hwdr.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpi7cjlnv5.png"
alt="tmpi7cjlnv5.png" title="tmpi7cjlnv5.png" />
</td>
</tr>
</table>
- Without aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpejtt2q6l.png"
alt="tmpejtt2q6l.png" title="tmpejtt2q6l.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpr1uu2lqc.png"
alt="tmpr1uu2lqc.png" title="tmpr1uu2lqc.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp71ryh4qs.png"
alt="tmp71ryh4qs.png" title="tmp71ryh4qs.png" />
</td>
</tr>
</table>
- **RaineWhispers**
- With aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpw8l2_i3p.png"
alt="tmpw8l2_i3p.png" title="tmpw8l2_i3p.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmptkrpkvr3.png"
alt="tmptkrpkvr3.png" title="tmptkrpkvr3.png"/>
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp00ihavmn.png"
alt="tmp00ihavmn.png" title="tmp00ihavmn.png" />
</td>
</tr>
</table>
- Without aniscreen:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp7_7sapkd.png"
alt="tmp7_7sapkd.png" title="tmp7_7sapkd.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmph5jt2jhc.png"
alt="tmph5jt2jhc.png" title="tmph5jt2jhc.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpqm9gdji2.png"
alt="tmpqm9gdji2.png" title="tmpqm9gdji2.png" />
</td>
</tr>
</table>
- **The following results were not very good, so they will be improved:**
- **EmperorBelos:**
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpfwe2r_n9.png"
alt="tmpfwe2r_n9.png" title="tmpfwe2r_n9.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpcwpxky0h.png"
alt="tmph5jt2jhc.png" title="tmpcwpxky0h.png" />
</td>
</tr>
</table>
- **KingClawthorne:**
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmplyxshw2s.png"
alt="tmplyxshw2s.png" title="tmplyxshw2s.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpleuknqw7.png"
alt="tmpleuknqw7.png" title="tmpleuknqw7.png" />
</td>
</tr>
</table>
- **TheCollector:**
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpy9g16w2e.png"
alt="tmpy9g16w2e.png" title="tmpy9g16w2e.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpewst_292.png"
alt="tmpewst_292.png" title="tmpewst_292.png" />
</td>
</tr>
</table>
#### Other images related to the model:
<table>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpybbcktbushsmdjbg.png"
alt="tmpybbcktbushsmdjbg.png" title="tmpybbcktbushsmdjbg.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpsk3cma8w.png"
alt="tmpsk3cma8w.png" title="tmpsk3cma8w.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpr9_9vfxfxg4cl73p.png"
alt="tmpr9_9vfxfxg4cl73p.png" title="tmpr9_9vfxfxg4cl73p.png" />
</td>
</tr>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmppqseq15u52fvm0re.png"
alt="tmppqseq15u52fvm0re.png" title="tmppqseq15u52fvm0re.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpj3aav7hkft2r_m1b.png"
alt="tmpj3aav7hkft2r_m1b.png" title="tmpj3aav7hkft2r_m1b.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmph4or82wbzk31rvbe.png"
alt="tmph4or82wbzk31rvbe.png" title="tmph4or82wbzk31rvbe.png" />
</td>
</tr>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpfy02p5xq.png"
alt="tmpfy02p5xq.png" title="tmpfy02p5xq.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpmh7omsn7.png"
alt="tmpmh7omsn7.png" title="tmpmh7omsn7.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpwww_beg5.jpg"
alt="tmpwww_beg5.jpg" title="tmpwww_beg5.jpg" />
</td>
</tr>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/de41502c-7572-44c7-aed0-12cf85600fa3.jfif"
alt="de41502c-7572-44c7-aed0-12cf85600fa3.jfif" title="de41502c-7572-44c7-aed0-12cf85600fa3.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/4c0086f7-5940-47aa-8e86-a06f8b220501.jfif"
alt="4c0086f7-5940-47aa-8e86-a06f8b220501.jfif" title="4c0086f7-5940-47aa-8e86-a06f8b220501.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/e4ee0845-9a64-40a8-8106-a5a941699364.jfif"
alt="e4ee0845-9a64-40a8-8106-a5a941699364.jfif" title="e4ee0845-9a64-40a8-8106-a5a941699364.jfif" />
</td>
</tr>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/fe40c6ab-88b9-4bad-b38b-eb9e7a0cfacf.jfif"
alt="fe40c6ab-88b9-4bad-b38b-eb9e7a0cfacf.jfif" title="fe40c6ab-88b9-4bad-b38b-eb9e7a0cfacf.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/c7c39df4-3b03-4e68-b7fb-9ae7a96f8809.jfif"
alt="c7c39df4-3b03-4e68-b7fb-9ae7a96f8809.jfif" title="c7c39df4-3b03-4e68-b7fb-9ae7a96f8809.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/3744481e-ecc0-46ab-ad9a-4e8374cb4d98.jfif"
alt="3744481e-ecc0-46ab-ad9a-4e8374cb4d98.jfif" title="3744481e-ecc0-46ab-ad9a-4e8374cb4d98.jfif" />
</td>
</tr>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/5c31aaa7-779a-4edb-9722-5fd4be7f30df.jfif"
alt="5c31aaa7-779a-4edb-9722-5fd4be7f30df.jfif" title="5c31aaa7-779a-4edb-9722-5fd4be7f30df.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/8d84f980-4508-4440-8414-ff7c6aca3a09.jfif"
alt="8d84f980-4508-4440-8414-ff7c6aca3a09.jfif" title="8d84f980-4508-4440-8414-ff7c6aca3a09.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/9127b248-c3be-4302-9653-2f661d257a6a.jfif"
alt="9127b248-c3be-4302-9653-2f661d257a6a.jfif" title="9127b248-c3be-4302-9653-2f661d257a6a.jfif" />
</td>
</tr>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/e7e13379-8e65-42c8-aa5a-a44e0efdffdf.jfif"
alt="e7e13379-8e65-42c8-aa5a-a44e0efdffdf.jfif" title="e7e13379-8e65-42c8-aa5a-a44e0efdffdf.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/9c320730-08cf-461a-9bca-30086ad818a3.jfif"
alt="9c320730-08cf-461a-9bca-30086ad818a3.jfif" title="9c320730-08cf-461a-9bca-30086ad818a3.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/cf4b225f-53ab-442c-abb3-0ca33cdb4207.jfif"
alt="cf4b225f-53ab-442c-abb3-0ca33cdb4207.jfif" title="cf4b225f-53ab-442c-abb3-0ca33cdb4207.jfif" />
</td>
</tr>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/808fa3af-772f-4365-a775-4d230ed2a2d5.jfif"
alt="808fa3af-772f-4365-a775-4d230ed2a2d5.jfif" title="808fa3af-772f-4365-a775-4d230ed2a2d5.jfif" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpqzmq0zi3.png"
alt="tmpqzmq0zi3.png" title="tmpqzmq0zi3.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmp7uhbyh_r.png"
alt="tmp7uhbyh_r.png" title="tmp7uhbyh_r.png" />
</td>
</tr>
<tr>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmptfvhyn1tca4prsie.png"
alt="tmptfvhyn1tca4prsie.png" title="tmptfvhyn1tca4prsie.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpme_bpm5q.png"
alt="tmpme_bpm5q.png" title="tmpme_bpm5q.png" />
</td>
<td align="center">
<img
src="https://huggingface.co/Jartemio/The_Owl_Characters_V2/resolve/main/images/tmpn37_afcj.png"
alt="tmpn37_afcj.png" title="tmpn37_afcj.png" />
</td>
</tr>
</table>
**Note**: *The following characters were also trained but the desired results were not obtained. They
will be fixed in future updates:*
- *HunterGoldenGuard, RaineWhispers, TheCollector, WillowPark, KingClawthorne, EdalynClawthorne, EmperorBelos,
GusPotter, LilithClawthorne*
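For local inference, a minimal diffusers sketch (character and style keys as documented above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Jartemio/The_Owl_Characters_V2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "LuzNoceda" is a character key and "aniscreen" the style key from this card
image = pipe("LuzNoceda smiling, aniscreen").images[0]
image.save("luz.png")
```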
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
golightly/dqn-SpaceInvadersNoFrameskip-v4
|
golightly
| 2023-03-17T22:28:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-17T22:28:18Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 526.00 +/- 180.57
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga golightly -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga golightly -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga golightly
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
jaybeeja/ReinforceCartpoleLatest
|
jaybeeja
| 2023-03-17T22:19:43Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-17T22:19:32Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: ReinforceCartpoleLatest
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
abhijitt/bert_st_qa_all-MiniLM-L12-v2-epochs-1
|
abhijitt
| 2023-03-17T21:50:11Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-17T21:48:38Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# abhijitt/bert_st_qa_all-MiniLM-L12-v2-epochs-1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('abhijitt/bert_st_qa_all-MiniLM-L12-v2-epochs-1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=abhijitt/bert_st_qa_all-MiniLM-L12-v2-epochs-1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1369 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 136,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Gustavosta/MagicPrompt-Dalle
|
Gustavosta
| 2023-03-17T21:38:43Z | 1,407 | 48 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-18T03:47:03Z |
---
license: mit
---
# MagicPrompt - Dall-E 2
This is a model from the MagicPrompt series of models, which are [GPT-2](https://huggingface.co/gpt2) models intended to generate prompt texts for imaging AIs, in this case: [Dall-E 2](https://openai.com/dall-e-2/).
## 🖼️ Here's an example:
<img src="https://files.catbox.moe/h10plz.png">
This model was trained on a set of about 26k prompts filtered and extracted from various places such as: [The Web Archive](https://web.archive.org/web/*/https://labs.openai.com/s/*), [The SubReddit for Dall-E 2](https://www.reddit.com/r/dalle2) and [dalle2.gallery](https://dalle2.gallery/#search). This may be a relatively small dataset, but we have to consider that Dall-E 2 is a closed service and, for now, we only have prompts from people who share them and have access to the service. The model was trained for about 40,000 steps, and I have plans to improve it if possible.
If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Dalle](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Dalle)".
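A minimal generation sketch (a standard GPT-2 text-generation pipeline):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Gustavosta/MagicPrompt-Dalle")
for out in generator("a portrait of", max_length=60, num_return_sequences=3):
    print(out["generated_text"])
```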
## 💻 You can see other MagicPrompt models:
- For Stable Diffusion: [Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion)
- For Midjourney: [Gustavosta/MagicPrompt-Midjourney](https://huggingface.co/Gustavosta/MagicPrompt-Midjourney) **[⚠️ In progress]**
- MagicPrompt full: [Gustavosta/MagicPrompt](https://huggingface.co/Gustavosta/MagicPrompt) **[⚠️ In progress]**
## ⚖️ Licence:
[MIT](https://huggingface.co/models?license=license:mit)
When using this model, please credit: [Gustavosta](https://huggingface.co/Gustavosta)
**Thanks for reading this far! :)**
|
abhijitt/bert_st_qa_msmarco-bert-base-dot-v5-epochs-1
|
abhijitt
| 2023-03-17T21:32:59Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-17T21:28:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# abhijitt/bert_st_qa_msmarco-bert-base-dot-v5-epochs-1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('abhijitt/bert_st_qa_msmarco-bert-base-dot-v5-epochs-1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('abhijitt/bert_st_qa_msmarco-bert-base-dot-v5-epochs-1')
model = AutoModel.from_pretrained('abhijitt/bert_st_qa_msmarco-bert-base-dot-v5-epochs-1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=abhijitt/bert_st_qa_msmarco-bert-base-dot-v5-epochs-1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1369 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 136,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jcramirezpr/ppo-SnowballTarget
|
jcramirezpr
| 2023-03-17T21:27:24Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-17T21:27:18Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: jcramirezpr/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MakiPan/ppo-Huggy
|
MakiPan
| 2023-03-17T21:24:38Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-03-17T21:24:28Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: MakiPan/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|