| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 12:28:49) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 530 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 12:27:35) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Brainergy/ppaattaass
|
Brainergy
| 2023-01-10T23:12:47Z | 31 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-10T23:02:09Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ppaattaass Dreambooth model trained by Brainergy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
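The checkpoint can also be loaded with 🤗 Diffusers. A minimal loading sketch, assuming the repo is a standard `StableDiffusionPipeline` checkpoint (as the tags suggest) and that the concept token is `ppaattaass` (inferred from the model name, not stated in the card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; the prompt token and the float16/CUDA setup are assumptions, not from the card.
pipe = StableDiffusionPipeline.from_pretrained("Brainergy/ppaattaass", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of ppaattaass").images[0]
image.save("sample.png")
```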
Sample pictures of this concept:
|
cleanrl/Tennis-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1
|
cleanrl
| 2023-01-10T23:09:47Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Tennis-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T23:09:43Z |
---
tags:
- Tennis-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Tennis-v5
type: Tennis-v5
metrics:
- type: mean_reward
value: -0.30 +/- 0.64
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Tennis-v5**
This is a trained model of a PPO agent playing Tennis-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]"
python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Tennis-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Tennis-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py
curl -OL https://huggingface.co/cleanrl/Tennis-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Tennis-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Tennis-v5 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'async_batch_size': 16,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Tennis-v5',
'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado',
'gae': True,
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1024,
'norm_adv': True,
'num_envs': 64,
'num_minibatches': 2,
'num_steps': 32,
'num_updates': 24414,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 2,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'envpool-atari'}
```
|
ongp/swin-tiny-patch4-window7-224-finetuned-eurosat
|
ongp
| 2023-01-10T23:07:31Z | 194 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-10T23:02:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
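A minimal inference sketch using the 🤗 Transformers `pipeline` API (the input path is illustrative):
```python
from transformers import pipeline

# Minimal sketch; "example.jpg" is an illustrative input path.
classifier = pipeline("image-classification", model="ongp/swin-tiny-patch4-window7-224-finetuned-eurosat")
print(classifier("example.jpg"))
```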
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
adelc/ppo-LunarLander-v2
|
adelc
| 2023-01-10T22:33:33Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T22:33:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.58 +/- 19.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption; check the repo's file listing before use.
checkpoint = load_from_hub(repo_id="adelc/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Mithul/Taxi-v3
|
Mithul
| 2023-01-10T22:27:17Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T22:27:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Mithul/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Qilex/dqn-SpaceInvadersNoFrameskip-v4
|
Qilex
| 2023-01-10T22:10:21Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T22:09:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 614.50 +/- 240.09
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Qilex -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Qilex -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Qilex
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 150000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
ljicvedera/dqn-MsPacmanNoFrameskip_1-v4
|
ljicvedera
| 2023-01-10T22:04:03Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MsPacmanNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T22:03:35Z |
---
library_name: stable-baselines3
tags:
- MsPacmanNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
metrics:
- type: mean_reward
value: 109.00 +/- 25.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **MsPacmanNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **MsPacmanNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga ljicvedera -f logs/
python -m rl_zoo3.enjoy --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga ljicvedera -f logs/
python -m rl_zoo3.enjoy --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/ -orga ljicvedera
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Mithul/q-FrozenLake-v1-4x4-noSlippery
|
Mithul
| 2023-01-10T22:03:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T22:03:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Mithul/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jojeyh/xlm-roberta-base-finetuned-panx-de-fr
|
jojeyh
| 2023-01-10T21:53:53Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-10T21:23:22Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1656
- F1: 0.8589
## Model description
More information needed
## Intended uses & limitations
More information needed
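A minimal token-classification sketch with the 🤗 Transformers `pipeline` API (the example sentence is illustrative):
```python
from transformers import pipeline

# Minimal sketch; the example sentence is illustrative.
ner = pipeline("token-classification",
               model="jojeyh/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")
print(ner("Angela Merkel besuchte gestern Paris."))
```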
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2905 | 1.0 | 715 | 0.1783 | 0.8310 |
| 0.1461 | 2.0 | 1430 | 0.1600 | 0.8455 |
| 0.0948 | 3.0 | 2145 | 0.1656 | 0.8589 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
rahuldhodapkar/protgpt2-finetuned-sarscov2-rbd
|
rahuldhodapkar
| 2023-01-10T21:50:41Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"Text Generation",
"Primary Sequence Prediction",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-10T16:51:13Z |
---
license: cc-by-nc-nd-4.0
metrics:
- accuracy
tags:
- generated_from_trainer
- Text Generation
- Primary Sequence Prediction
model-index:
- name: protgpt2-finetuned-sarscov2-rbd
results: []
---
# Model Card for `protgpt2-finetuned-sarscov2-rbd`
This model is a fine-tuned version of [nferruz/ProtGPT2](https://huggingface.co/nferruz/ProtGPT2) on sequences from the NCBI Virus Data Portal.
It achieves the following results on the evaluation set:
- Loss: 1.1674
- Accuracy: 0.8883
## Model description
This model is a fine-tuned checkpoint of
[ProtGPT2](https://huggingface.co/nferruz/ProtGPT2), which was originally
trained on the UniRef50 (version 2021_04) database. For a detailed overview
of the original model configuration and architecture, please see the linked
model card, or refer to the ProtGPT2 publication.
The model was finetuned on data from the SARS-CoV-2 Spike (surface glycoprotein)
receptor binding domain (RBD).
A repository with the training scripts, train and test data partitions, as well
as evaluation code is available on GitHub at
(https://github.com/rahuldhodapkar/PredictSARSVariants).
## Intended uses & limitations
This model is intended to generate synthetic SARS-CoV-2 surface glycoprotein
(a.k.a. spike protein) sequences for the purpose of identifying meaningful
variants for characterization either experimentally or through other
*in silico* tools. These variants may be used to drive vaccine development to
protect against never-before-seen point mutants that are probable in the future.
As this model is based on the original ProtGPT2 model, it is subject to many
of the same limitations as the base model. Any biases present in the UniRef50
dataset will also be present in the model, which may include nonuniform skew
of peptides sampled across different taxonomic clades. These limitations
should be considered when interpreting the output of this model.
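As a rough illustration of this use case, candidate sequences can be sampled with the 🤗 Transformers `pipeline` API; the seed token and sampling settings below are illustrative assumptions, not the authors' protocol:
```python
from transformers import pipeline

# Minimal sampling sketch; seed token and sampling settings are illustrative, not the authors' protocol.
generator = pipeline("text-generation", model="rahuldhodapkar/protgpt2-finetuned-sarscov2-rbd")
sequences = generator("<|endoftext|>", max_length=120, do_sample=True, top_k=950, num_return_sequences=3)
for s in sequences:
    print(s["generated_text"])
```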
## Training and evaluation data
SARS-CoV-2 spike protein sequences were obtained from the NIH SARS-CoV-2 Data Hub
accessible at
https://www.ncbi.nlm.nih.gov/labs/virus/vssi/
Note that the reference sequence for the surface glycoprotein can be found at:
https://www.ncbi.nlm.nih.gov/protein/1791269090
As the loaded ProtGPT2 model was pretrained on the
UniRef50 (version 2021_04) dataset, it cannot have contained sequencing
data that was generated after that date. Evaluations will be conducted using
SARS-CoV-2 sequences generated on or after May 2021.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.11.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
GrumpyPants/dqn-SpaceInvadersNoFrameskip-v4
|
GrumpyPants
| 2023-01-10T21:41:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T21:40:25Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 575.00 +/- 174.11
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga GrumpyPants -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga GrumpyPants -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga GrumpyPants
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
eliotz/Reinforce-cartpole
|
eliotz
| 2023-01-10T21:36:14Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T21:36:08Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Orahra/X23
|
Orahra
| 2023-01-10T20:46:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T20:45:18Z |
beautiful, cyberpunk, golden crown, anime boy, smart, handsome, purple lightning
|
CoreyMorris/ppo-SnowballTarget
|
CoreyMorris
| 2023-01-10T20:37:31Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-10T20:36:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: CoreyMorris/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ezaromb/sd-class-butterflies-64
|
ezaromb
| 2023-01-10T20:12:24Z | 31 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-01-10T20:11:32Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ezaromb/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
0xid/ppo-SnowballTarget
|
0xid
| 2023-01-10T19:51:17Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-10T19:51:10Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: 0xid/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
johko/capdec_015
|
johko
| 2023-01-10T19:33:40Z | 0 | 0 | null |
[
"Image Captioning",
"image-to-text",
"en",
"dataset:MS-COCO",
"dataset:Flickr30k",
"arxiv:2211.00575",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-12-19T19:35:44Z |
---
license: apache-2.0
language:
- en
pipeline_tag: image-to-text
datasets:
- MS-COCO
- Flickr30k
tags:
- Image Captioning
---
# CapDec - NoiseLevel: 0.015
## Model Description
These are model weights originally provided by the authors of the paper [Text-Only Training for Image Captioning using Noise-Injected CLIP](https://arxiv.org/pdf/2211.00575.pdf).
Their method aims to train CLIP with only text samples. Therefore, they inject zero-mean Gaussian noise into the text embeddings before decoding.
In their words:
*Specifically, we assume that the visual embedding corresponding to a text embedding
lies somewhere within a ball of small radius around the text embedding (see Fig. 1).
We would like all text embeddings in this ball to decode to the same caption, which should
also correspond to the visual content mapped to this ball. We implement this intuition by
adding zero-mean Gaussian noise of STD to the text embedding before decoding it.*
The "Noise Level" of 0.015 is equivalent to the Noise Variance which is the square of the STD.
The reported metrics are results of a model with a Noise Variance of 0.016, which the authors unfortunately do not provide in their repository.
This model with a Noise Variance 0.015 is the closest available pre-trained model to their best model.
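As an illustrative sketch (not the authors' code) of what this noise level means in practice, the injected noise has a standard deviation equal to the square root of the variance:
```python
import torch

# Illustrative sketch of CapDec-style noise injection; the embedding size is a placeholder.
noise_variance = 0.015
std = noise_variance ** 0.5                       # ~0.1225
text_embedding = torch.randn(1, 512)              # placeholder CLIP text embedding
noisy_embedding = text_embedding + torch.randn_like(text_embedding) * std
```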
## Datasets
The authors trained the model on MS-COCO and Flickr30k datasets.
## Performance
The authors don't explicitly report the performance for this noise level, but it can be estimated from the following figure from the original paper:

|
jpopham91/dqn-SpaceInvadersNoFrameskip-v4
|
jpopham91
| 2023-01-10T19:30:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T19:29:46Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 444.50 +/- 227.41
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jpopham91 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jpopham91 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jpopham91
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
lmazzon70/videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-sample8
|
lmazzon70
| 2023-01-10T19:22:54Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-01-10T11:26:16Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-sample8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-sample8
This model is a fine-tuned version of [MCG-NJU/videomae-base-short-finetuned-ssv2](https://huggingface.co/MCG-NJU/videomae-base-short-finetuned-ssv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2493
- Accuracy: 0.3857
## Model description
More information needed
## Intended uses & limitations
More information needed
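A minimal video-classification sketch with the 🤗 Transformers `pipeline` API (requires a video decoder such as `decord`; the clip path is illustrative):
```python
from transformers import pipeline

# Minimal sketch; the clip path is illustrative and decord must be installed for video decoding.
classifier = pipeline("video-classification",
                      model="lmazzon70/videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-sample8")
print(classifier("example_clip.mp4"))
```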
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 6400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6783 | 0.12 | 800 | 0.5823 | 0.8175 |
| 0.7397 | 1.12 | 1600 | 2.2365 | 0.5475 |
| 0.206 | 2.12 | 2400 | 1.4244 | 0.6375 |
| 0.0431 | 3.12 | 3200 | 0.9144 | 0.7525 |
| 0.0033 | 4.12 | 4000 | 0.7622 | 0.825 |
| 0.0011 | 5.12 | 4800 | 1.0658 | 0.775 |
| 0.001 | 6.12 | 5600 | 1.6892 | 0.6875 |
| 0.2392 | 7.12 | 6400 | 1.1574 | 0.7825 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
rishipatel92/ppo-SnowballTarget101
|
rishipatel92
| 2023-01-10T19:15:48Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-10T18:40:54Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: rishipatel92/ppo-SnowballTarget101
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
truthseekah/TruthSeekah
|
truthseekah
| 2023-01-10T19:13:10Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-01-10T19:11:43Z |
---
# For reference on model card metadata, see: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
[More Information Needed]
</details>
|
kadirnar/yolov8x-v8.0
|
kadirnar
| 2023-01-10T19:05:58Z | 0 | 0 | null |
[
"object-detection",
"computer-vision",
"yolov8",
"yolov5",
"dataset:detection-datasets/coco",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2023-01-10T18:53:22Z |
---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- yolov8
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[Ultralytics](https://github.com/ultralytics/ultralytics/): YOLOv8 in PyTorch > ONNX > CoreML > TFLite
### Installation
```
pip install ultralytics
```
### Yolov8 Inference
```python
from ultralytics import YOLO

model = YOLO('kadirnar/yolov8x-v8.0')
# Illustrative thresholds and input; adjust to your use case.
conf_threshold, iou_threshold = 0.25, 0.45
image, image_size = 'image.jpg', 640
model.conf = conf_threshold
model.iou = iou_threshold
prediction = model.predict(image, imgsz=image_size, show=False, save=False)
```
### BibTeX Entry and Citation Info
```
```
|
kadirnar/yolov8l-v8.0
|
kadirnar
| 2023-01-10T19:05:39Z | 0 | 1 | null |
[
"object-detection",
"computer-vision",
"yolov8",
"yolov5",
"dataset:detection-datasets/coco",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2023-01-10T18:53:01Z |
---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- yolov8
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[Ultralytics](https://github.com/ultralytics/ultralytics/): YOLOv8 in PyTorch > ONNX > CoreML > TFLite
### Installation
```
pip install ultralytics
```
### Yolov8 Inference
```python
from ultralytics import YOLO

model = YOLO('kadirnar/yolov8l-v8.0')
# Illustrative thresholds and input; adjust to your use case.
conf_threshold, iou_threshold = 0.25, 0.45
image, image_size = 'image.jpg', 640
model.conf = conf_threshold
model.iou = iou_threshold
prediction = model.predict(image, imgsz=image_size, show=False, save=False)
```
### BibTeX Entry and Citation Info
```
```
|
kadirnar/yolov8m-v8.0
|
kadirnar
| 2023-01-10T19:05:21Z | 0 | 0 | null |
[
"object-detection",
"computer-vision",
"yolov8",
"yolov5",
"dataset:detection-datasets/coco",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2023-01-10T18:51:51Z |
---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- yolov8
- yolov5
datasets:
- detection-datasets/coco
---
### Model Description
[Ultralytics](https://github.com/ultralytics/ultralytics/): YOLOv8 in PyTorch > ONNX > CoreML > TFLite
### Installation
```
pip install ultralytics
```
### Yolov8 Inference
```python
from ultralytics import YOLO

model = YOLO('kadirnar/yolov8m-v8.0')
# Illustrative thresholds and input; adjust to your use case.
conf_threshold, iou_threshold = 0.25, 0.45
image, image_size = 'image.jpg', 640
model.conf = conf_threshold
model.iou = iou_threshold
prediction = model.predict(image, imgsz=image_size, show=False, save=False)
```
### BibTeX Entry and Citation Info
```
```
|
eliotz/taxicab
|
eliotz
| 2023-01-10T19:00:27Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T19:00:22Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxicab
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="eliotz/taxicab", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
codingmoh/cat-breed-identifier-23-01
|
codingmoh
| 2023-01-10T18:54:16Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-01-10T18:54:07Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
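A minimal loading sketch, assuming the repo contains a standard fastai Learner pushed with `push_to_hub_fastai` (the input path is illustrative):
```python
from huggingface_hub import from_pretrained_fastai

# Minimal sketch; assumes a standard fastai Learner export and an illustrative input image path.
learner = from_pretrained_fastai("codingmoh/cat-breed-identifier-23-01")
pred, _, probs = learner.predict("cat.jpg")
print(pred, probs.max().item())
```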
## Training and evaluation data
More information needed
|
juanfdangelo/ddpm-butterflies-128
|
juanfdangelo
| 2023-01-10T18:52:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2023-01-10T16:03:36Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
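Until the snippet above is filled in, a minimal sketch assuming the repo is a standard `DDPMPipeline` checkpoint (as the tags suggest):
```python
from diffusers import DDPMPipeline

# Minimal sketch; assumes a standard DDPMPipeline checkpoint.
pipeline = DDPMPipeline.from_pretrained("juanfdangelo/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```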
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/juanfdangelo/ddpm-butterflies-128/tensorboard?#scalars)
|
jinghua2tang/ppo-SnowballTarget
|
jinghua2tang
| 2023-01-10T18:50:19Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-10T18:50:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: jinghua2tang/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BobMcDear/convnextv2_tiny_384_fcmae_in22ft1k
|
BobMcDear
| 2023-01-10T18:48:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:26Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_nano_fcmae
|
BobMcDear
| 2023-01-10T18:48:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:24Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_base_384_fcmae_in22ft1k
|
BobMcDear
| 2023-01-10T18:48:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:25Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_pico_fcmae
|
BobMcDear
| 2023-01-10T18:48:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:15Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_femto_fcmae
|
BobMcDear
| 2023-01-10T18:47:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:06Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_femto_fcmae_ftin1k
|
BobMcDear
| 2023-01-10T18:47:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:13Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_base_fcmae_ftin1k
|
BobMcDear
| 2023-01-10T18:47:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:24Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_large_fcmae
|
BobMcDear
| 2023-01-10T18:47:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:22Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
aj555/ppo-Huggy
|
aj555
| 2023-01-10T18:46:49Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-01-10T18:46:42Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: aj555/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BobMcDear/convnextv2_base_fcmae
|
BobMcDear
| 2023-01-10T18:46:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:07Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_atto_fcmae_ftin1k
|
BobMcDear
| 2023-01-10T18:46:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:17Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_tiny_fcmae
|
BobMcDear
| 2023-01-10T18:46:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:21Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_large_fcmae_in22ft1k
|
BobMcDear
| 2023-01-10T18:46:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:19Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_large_fcmae_ftin1k
|
BobMcDear
| 2023-01-10T18:45:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:14Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnextv2_huge_fcmae_ftin1k
|
BobMcDear
| 2023-01-10T18:45:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-01-10T15:09:10Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
sd-concepts-library/ambrose-arm-chair
|
sd-concepts-library
| 2023-01-10T18:38:48Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2023-01-10T18:38:44Z |
---
license: mit
---
### ambrose-arm-chair on Stable Diffusion
This is the `<ambrose-arm-chair>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
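Alternatively, a minimal 🤗 Diffusers sketch (an assumption, not from the card): load the concept embedding into a base Stable Diffusion checkpoint with `load_textual_inversion` and use the placeholder token in a prompt.
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; the base checkpoint and the prompt are assumptions, not from the card.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/ambrose-arm-chair")
image = pipe("a photo of a <ambrose-arm-chair> in a sunlit living room").images[0]
image.save("ambrose-arm-chair.png")
```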
Here is the new concept you will be able to use as an `object`:





|
AWP/cat-breed-identifier
|
AWP
| 2023-01-10T18:37:19Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-01-10T18:37:11Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
AgentXXX/ppo-PyramidsRND
|
AgentXXX
| 2023-01-10T18:33:31Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-01-10T18:33:24Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: AgentXXX/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
deliveroo/ppo-Huggy
|
deliveroo
| 2023-01-10T18:29:23Z | 24 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-01-10T18:29:15Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: deliveroo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sd-concepts-library/minecraft-concept-art
|
sd-concepts-library
| 2023-01-10T18:25:05Z | 0 | 14 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-10T18:21:29Z |
---
license: mit
inference: true
---
### minecraft-concept-art on Stable Diffusion
This is the `<concept>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
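Alternatively, a minimal 🤗 Diffusers sketch (an assumption, not from the card): load the concept embedding into a base Stable Diffusion checkpoint with `load_textual_inversion` and use the `<concept>` token as a style in a prompt.
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; the base checkpoint and the prompt are assumptions, not from the card.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/minecraft-concept-art")
image = pipe("a castle on a hill in the style of <concept>").images[0]
image.save("minecraft-concept.png")
```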
Here is the new concept you will be able to use as a `style`:




|
cleanrl/Surround-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1
|
cleanrl
| 2023-01-10T18:12:34Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Surround-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T18:12:30Z |
---
tags:
- Surround-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Surround-v5
type: Surround-v5
metrics:
- type: mean_reward
value: 2.20 +/- 3.12
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Surround-v5**
This is a trained model of a PPO agent playing Surround-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]"
python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Surround-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Surround-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py
curl -OL https://huggingface.co/cleanrl/Surround-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Surround-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Surround-v5 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'async_batch_size': 16,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Surround-v5',
'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado',
'gae': True,
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1024,
'norm_adv': True,
'num_envs': 64,
'num_minibatches': 2,
'num_steps': 32,
'num_updates': 24414,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 2,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'envpool-atari'}
```
|
marianokamp/dqn-SpaceInvadersNoFrameskip-v4
|
marianokamp
| 2023-01-10T18:09:30Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T18:08:52Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 627.00 +/- 170.77
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga marianokamp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga marianokamp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga marianokamp
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
mus-shd/ppo-Huggy
|
mus-shd
| 2023-01-10T18:03:48Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-01-10T18:03:40Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: mus-shd/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
nepp1d0/prot_bert_classification_finetuned_karolina_es_20e
|
nepp1d0
| 2023-01-10T18:00:18Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-10T17:11:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: prot_bert_classification_finetuned_karolina_es_20e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_bert_classification_finetuned_karolina_es_20e
This model is a fine-tuned version of [nepp1d0/prot_bert-finetuned-smiles-bindingDB](https://huggingface.co/nepp1d0/prot_bert-finetuned-smiles-bindingDB) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6763
- Accuracy: 0.92
- F1: 0.9583
- Precision: 1.0
- Recall: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 2 | 0.7084 | 0.02 | 0.0392 | 1.0 | 0.02 |
| No log | 2.0 | 4 | 0.7082 | 0.02 | 0.0392 | 1.0 | 0.02 |
| No log | 3.0 | 6 | 0.7078 | 0.04 | 0.0769 | 1.0 | 0.04 |
| No log | 4.0 | 8 | 0.7072 | 0.04 | 0.0769 | 1.0 | 0.04 |
| No log | 5.0 | 10 | 0.7065 | 0.04 | 0.0769 | 1.0 | 0.04 |
| No log | 6.0 | 12 | 0.7055 | 0.04 | 0.0769 | 1.0 | 0.04 |
| No log | 7.0 | 14 | 0.7044 | 0.04 | 0.0769 | 1.0 | 0.04 |
| No log | 8.0 | 16 | 0.7031 | 0.06 | 0.1132 | 1.0 | 0.06 |
| No log | 9.0 | 18 | 0.7017 | 0.12 | 0.2143 | 1.0 | 0.12 |
| No log | 10.0 | 20 | 0.6999 | 0.2 | 0.3333 | 1.0 | 0.2 |
| No log | 11.0 | 22 | 0.6981 | 0.22 | 0.3607 | 1.0 | 0.22 |
| No log | 12.0 | 24 | 0.6962 | 0.22 | 0.3607 | 1.0 | 0.22 |
| No log | 13.0 | 26 | 0.6941 | 0.24 | 0.3871 | 1.0 | 0.24 |
| No log | 14.0 | 28 | 0.6917 | 0.44 | 0.6111 | 1.0 | 0.44 |
| No log | 15.0 | 30 | 0.6893 | 0.58 | 0.7342 | 1.0 | 0.58 |
| No log | 16.0 | 32 | 0.6869 | 0.76 | 0.8636 | 1.0 | 0.76 |
| No log | 17.0 | 34 | 0.6842 | 0.88 | 0.9362 | 1.0 | 0.88 |
| No log | 18.0 | 36 | 0.6816 | 0.9 | 0.9474 | 1.0 | 0.9 |
| No log | 19.0 | 38 | 0.6789 | 0.92 | 0.9583 | 1.0 | 0.92 |
| No log | 20.0 | 40 | 0.6763 | 0.92 | 0.9583 | 1.0 | 0.92 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
gday/ppo-LunarLander-v2
|
gday
| 2023-01-10T17:49:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T17:49:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.34 +/- 21.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (not the author's code; the checkpoint filename below is a guess — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub and load it (filename is assumed).
checkpoint = load_from_hub(repo_id="gday/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AgentXXX/ppo-SnowballTarget
|
AgentXXX
| 2023-01-10T17:23:10Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-10T17:23:02Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: AgentXXX/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Volodymyr/sd-class-butterflies-32
|
Volodymyr
| 2023-01-10T17:21:18Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-01-10T17:20:20Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Volodymyr/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Clawoo/rnd-PyramidsTraining
|
Clawoo
| 2023-01-10T17:10:55Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-01-10T17:10:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: Clawoo/rnd-PyramidsTraining
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Ineract/bert-large-uncased-whole-word-masking-finetuned-policy-number
|
Ineract
| 2023-01-10T16:43:20Z | 119 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:policies",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-10T16:24:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- policies
model-index:
- name: bert-large-uncased-whole-word-masking-finetuned-policy-number
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-finetuned-policy-number
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the policies dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
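A minimal inference sketch (not part of the original card; it assumes the standard `transformers` question-answering pipeline, and the question and context below are invented):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Ineract/bert-large-uncased-whole-word-masking-finetuned-policy-number",
)
# Hypothetical example; real policy documents will look different.
qa(question="What is the policy number?", context="Your policy number is POL-123456, effective from January 2023.")
```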
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 282 | 0.0031 |
| 0.0049 | 2.0 | 564 | 0.0000 |
| 0.0049 | 3.0 | 846 | 0.0000 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
custom-diffusion-library/cat
|
custom-diffusion-library
| 2023-01-10T16:31:12Z | 6 | 1 |
diffusers
|
[
"diffusers",
"pytorch",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:other",
"region:us"
] | null | 2022-12-19T15:32:13Z |
---
license: other
tags:
- pytorch
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
# This is a Custom Diffusion model fine-tuned from the Stable Diffusion v1-4.
[Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion/index.html) allows you to fine-tune text-to-image diffusion models, such as Stable Diffusion, given a few images of a new concept (~4-20).
Here we give an example model fine-tuned using 5 images of a cat downloaded from UnSplash. The example code of inference is shown below.
## Example code of inference
```
git clone https://github.com/adobe-research/custom-diffusion
cd custom-diffusion
```
```python
import torch

from diffusers import StableDiffusionPipeline
from src import diffuser_training  # provided by the custom-diffusion repo cloned above

device = 'cuda'
model_id = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
diffuser_training.load_model(pipe.text_encoder, pipe.tokenizer, pipe.unet, 'cat.bin')
prompt = "<new1> cat swimming in a pool"
images = pipe(prompt, num_inference_steps=200, guidance_scale=6., eta=1.).images
```
<center>
<img src="https://huggingface.co/custom-diffusion-library/cat/resolve/main/cat.png" width="600" align="center" >
</center>
|
cewinharhar/prot_t5_xl_alphaKGD_bacteriaMiddle
|
cewinharhar
| 2023-01-10T16:29:59Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-10T15:16:44Z |
---
tags:
- generated_from_trainer
model-index:
- name: prot_t5_xl_alphaKGD_bacteriaMiddle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_t5_xl_alphaKGD_bacteriaMiddle
This model is a fine-tuned version of [Rostlab/prot_t5_xl_uniref50](https://huggingface.co/Rostlab/prot_t5_xl_uniref50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 211 | 2.8487 |
| No log | 2.0 | 422 | 2.8389 |
| 3.2264 | 3.0 | 633 | 2.8333 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.7.1
- Tokenizers 0.11.0
|
m3kkasi/distilbert-cased-finetuned-newsqa
|
m3kkasi
| 2023-01-10T16:26:45Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-07T13:39:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-cased-finetuned-newsqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-cased-finetuned-newsqa
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
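A minimal inference sketch (not from the model author; it assumes the standard `transformers` question-answering pipeline and an invented news snippet):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="m3kkasi/distilbert-cased-finetuned-newsqa")
context = "The storm made landfall near the coast on Tuesday, forcing thousands of residents to evacuate."
qa(question="When did the storm make landfall?", context=context)
```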
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
chavicoski/Reinforce_Pixelcopter-PLE-v0
|
chavicoski
| 2023-01-10T16:20:07Z | 0 | 0 | null |
[
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"Pixelcopter-PLE-v0",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T16:17:10Z |
---
tags:
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
- Pixelcopter-PLE-v0
model-index:
- name: Reinforce_Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 73.39 +/- 55.42
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Closen/q-Taxi-v3
|
Closen
| 2023-01-10T16:14:58Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T16:09:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="Closen/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
edbeeching/rl_course_vizdoom_health_gathering_supreme
|
edbeeching
| 2023-01-10T16:06:58Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T15:21:04Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.07 +/- 1.90
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r edbeeching/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Module path assumed: the auto-generated command pointed at the notebook kernel used for training.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Module path assumed: the auto-generated command pointed at the notebook kernel used for training.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
iapetusLatent/Vega-0.2.4-preview
|
iapetusLatent
| 2023-01-10T16:04:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-10T14:10:23Z |
---
license: creativeml-openrail-m
---
|
Deisler/q-Taxi-v3-25x5x4-6-35000
|
Deisler
| 2023-01-10T15:59:00Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T15:30:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-25x5x4-6-35000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="Deisler/q-Taxi-v3-25x5x4-6-35000", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ineract/distilbert-base-uncased-finetuned-policies
|
Ineract
| 2023-01-10T15:41:44Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:policies",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-09T22:08:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- policies
model-index:
- name: distilbert-base-uncased-finetuned-policies
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-policies
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the policies dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4208 | 1.0 | 759 | 0.0183 |
| 0.0115 | 2.0 | 1518 | 0.0202 |
| 0.0048 | 3.0 | 2277 | 0.0193 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
poloclub/RobArch
|
poloclub
| 2023-01-10T15:21:39Z | 0 | 2 | null |
[
"adversarial machine learning",
"dataset:imagenet-1k",
"arxiv:2301.03110",
"license:mit",
"region:us"
] | null | 2023-01-09T21:02:13Z |
---
license: mit
datasets:
- imagenet-1k
metrics:
- accuracy
tags:
- adversarial machine learning
---
## RobArch: Designing Robust Architectures against Adversarial Attacks
*ShengYun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen Horng Chau, Jason Martin*
Check https://github.com/ShengYun-Peng/RobArch for the complete code.
### Abstract
Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs). However, compared to the large body of research in optimizing the adversarial training process, there are few investigations into how architecture components affect robustness, and they rarely constrain model capacity. Thus, it is unclear where robustness precisely comes from. In this work, we present the first large-scale systematic study on the robustness of DNN architecture components under fixed parameter budgets. Through our investigation, we distill 18 actionable robust network design guidelines that empower model developers to gain deep insights. We demonstrate these guidelines' effectiveness by introducing the novel Robust Architecture (RobArch) model that instantiates the guidelines to build a family of top-performing models across parameter capacities against strong adversarial attacks. RobArch achieves the new state-of-the-art AutoAttack accuracy on the RobustBench ImageNet leaderboard.
### Prerequisites
1. Register Weights & Biases [account](https://wandb.ai/site)
2. Prepare ImageNet via [Fast AT - Installation step 3 & 4](https://github.com/locuslab/fast_adversarial/tree/master/ImageNet)
> Run step 4 only if you want to use Fast-AT.
3. Set up venv:
```bash
make .venv_done
```
### Training
Fast-AT is much faster than standard PGD AT. For RobArch-S, Fast-AT takes ~1.5 days on 2 Nvidia A100s, whereas standard PGD AT takes ~5 days on 4 Nvidia A100s.
#### Torchvision models - Fast AT (e.g., ResNet-50)
```bash
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/Torch_ResNet50/.done_test_pgd
```
If you want to test other off-the-shelf models in [torchvision](https://pytorch.org/vision/stable/models.html#classification), add the model name in [MODEL.mk](MODEL.mk), and create a new make target by following other ResNets/WideResNets in [Makefile](Makefile).
#### RobArch - Fast AT (e.g., RobArch-S)
```bash
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/RobArch_S/.done_test_pgd
```
#### RobArch - Standard PGD AT (e.g., RobArch-S)
```bash
# Training
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/PGDAT_RobArch_S/.done_train
# Evaluation on PGD
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/PGDAT_RobArch_S/.done_test_pgd
# Evaluation on AutoAttack
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/PGDAT_RobArch_S/.done_test_aa
# Pretrained models evaluated on AutoAttack
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/PGDAT_RobArch_S/.done_test_pretrained
```
### Pretrained models
- ImageNet $\ell_\infty$-norm
| Architecture | #Param | Natural | AutoAttack | PGD10-4 | PGD50-4 | PGD100-4 | PGD100-2 | PGD100-8 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| [RobArch-S](https://huggingface.co/poloclub/RobArch/resolve/main/pretrained/robarch_s.pt) | 26M | 70.17% | 44.14% | 48.19% | 47.78% | 47.77% | 60.06% | 21.77% |
| [RobArch-M](https://huggingface.co/poloclub/RobArch/resolve/main/pretrained/robarch_m.pt) | 46M | 71.88% | 46.26% | 49.84% | 49.32% | 49.30% | 61.89% | 23.01% |
| [RobArch-L](https://huggingface.co/poloclub/RobArch/resolve/main/pretrained/robarch_l.pt) | 104M | 73.44% | 48.94% | 51.72% | 51.04% | 51.03% | 63.49% | 25.31% |
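A minimal sketch for fetching one of these checkpoints (instantiating the architecture itself requires the model code from the GitHub repo above; here we only download the file and load the raw weights):
```python
import torch
from huggingface_hub import hf_hub_download

# Fetch the RobArch-S checkpoint from this repo and load it on CPU.
ckpt_path = hf_hub_download(repo_id="poloclub/RobArch", filename="pretrained/robarch_s.pt")
checkpoint = torch.load(ckpt_path, map_location="cpu")
```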
### Citation
```bibtex
@misc{peng2023robarch,
title={RobArch: Designing Robust Architectures against Adversarial Attacks},
author={ShengYun Peng and Weilin Xu and Cory Cornelius and Kevin Li and Rahul Duggal and Duen Horng Chau and Jason Martin},
year={2023},
eprint={2301.03110},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
hr16/Miwano-Rag-LoRA
|
hr16
| 2023-01-10T15:15:08Z | 0 | 3 | null |
[
"stable-diffusion",
"safetensors",
"LoRA",
"Low-rank Adaptation",
"anime",
"text-to-image",
"en",
"dataset:hr16/Miwano-Rag",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-01-10T12:17:14Z |
---
license: creativeml-openrail-m
datasets:
- hr16/Miwano-Rag
language:
- en
pipeline_tag: text-to-image
tags:
- stable-diffusion
- safetensors
- LoRA
- Low-rank Adaptation
- anime
---
The files in this model repo are LoRA weights trained with [Kanianime](https://huggingface.co/Rasgeath/self_made_sauce/blob/main/Kani-anime-pruned.ckpt) by [Rasgeath](https://huggingface.co/Rasgeath) as the base model.
Use something like `masterpiece, best quality, 1girl, art by Miwano-Rag` as the prompt.
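A minimal, hypothetical way to try the LoRA with `diffusers` (this is not the author's workflow; it assumes a recent `diffusers` release with LoRA support, an SD 1.x base model in diffusers format, and a made-up weight filename — check this repo's file list for the real one):
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model and weight filename below are assumptions, not the author's setup.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hr16/Miwano-Rag-LoRA", weight_name="Miwano-Rag.safetensors")
image = pipe("masterpiece, best quality, 1girl, art by Miwano-Rag", num_inference_steps=28).images[0]
image.save("miwano_rag_sample.png")
```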
I'm too lazy to write a README lol.
|
Lilya/gpt2-ner-invoiceSenderRecipient_all_inv_03_01
|
Lilya
| 2023-01-10T15:06:25Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-03T19:49:36Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: gpt2-ner-invoiceSenderRecipient_all_inv_03_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-ner-invoiceSenderRecipient_all_inv_03_01
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0307
- Precision: 0.7932
- Recall: 0.8488
- F1: 0.8201
- Accuracy: 0.9895
## Model description
More information needed
## Intended uses & limitations
More information needed
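A minimal inference sketch (not from the model authors; it assumes the standard `transformers` token-classification pipeline and an invented invoice-style sentence):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Lilya/gpt2-ner-invoiceSenderRecipient_all_inv_03_01",
    aggregation_strategy="simple",
)
ner("Invoice issued by Acme Supplies Ltd. and addressed to Jane Doe, 42 Example Street, Springfield.")
```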
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0363 | 0.01 | 500 | 0.0338 | 0.7846 | 0.7969 | 0.7907 | 0.9884 |
| 0.0392 | 0.02 | 1000 | 0.0346 | 0.7665 | 0.8211 | 0.7929 | 0.9881 |
| 0.0363 | 0.04 | 1500 | 0.0347 | 0.7701 | 0.8075 | 0.7884 | 0.9880 |
| 0.0396 | 0.05 | 2000 | 0.0347 | 0.7454 | 0.8375 | 0.7888 | 0.9879 |
| 0.0366 | 0.06 | 2500 | 0.0350 | 0.7519 | 0.8345 | 0.7911 | 0.9879 |
| 0.0382 | 0.07 | 3000 | 0.0356 | 0.7500 | 0.8434 | 0.7939 | 0.9877 |
| 0.0424 | 0.09 | 3500 | 0.0358 | 0.7517 | 0.8287 | 0.7883 | 0.9877 |
| 0.0385 | 0.1 | 4000 | 0.0352 | 0.7605 | 0.8225 | 0.7903 | 0.9880 |
| 0.0382 | 0.11 | 4500 | 0.0361 | 0.7494 | 0.8159 | 0.7813 | 0.9874 |
| 0.0372 | 0.12 | 5000 | 0.0345 | 0.7817 | 0.8044 | 0.7929 | 0.9885 |
| 0.0377 | 0.14 | 5500 | 0.0346 | 0.7749 | 0.8238 | 0.7986 | 0.9884 |
| 0.0383 | 0.15 | 6000 | 0.0359 | 0.7568 | 0.8341 | 0.7936 | 0.9879 |
| 0.0372 | 0.16 | 6500 | 0.0356 | 0.7548 | 0.8356 | 0.7932 | 0.9879 |
| 0.0371 | 0.17 | 7000 | 0.0352 | 0.7540 | 0.8477 | 0.7981 | 0.9880 |
| 0.0368 | 0.19 | 7500 | 0.0349 | 0.7662 | 0.8310 | 0.7973 | 0.9881 |
| 0.0388 | 0.2 | 8000 | 0.0339 | 0.7648 | 0.8336 | 0.7977 | 0.9883 |
| 0.0368 | 0.21 | 8500 | 0.0336 | 0.7729 | 0.8305 | 0.8006 | 0.9886 |
| 0.0389 | 0.22 | 9000 | 0.0340 | 0.7750 | 0.8208 | 0.7972 | 0.9884 |
| 0.0384 | 0.24 | 9500 | 0.0349 | 0.7549 | 0.8499 | 0.7996 | 0.9880 |
| 0.0376 | 0.25 | 10000 | 0.0358 | 0.7531 | 0.8390 | 0.7938 | 0.9875 |
| 0.0354 | 0.26 | 10500 | 0.0346 | 0.7650 | 0.8318 | 0.7970 | 0.9882 |
| 0.0358 | 0.27 | 11000 | 0.0338 | 0.7694 | 0.8397 | 0.8030 | 0.9886 |
| 0.0389 | 0.28 | 11500 | 0.0341 | 0.7586 | 0.8502 | 0.8018 | 0.9882 |
| 0.0383 | 0.3 | 12000 | 0.0342 | 0.7688 | 0.8275 | 0.7971 | 0.9881 |
| 0.0355 | 0.31 | 12500 | 0.0337 | 0.7783 | 0.8281 | 0.8024 | 0.9885 |
| 0.0372 | 0.32 | 13000 | 0.0338 | 0.7703 | 0.8399 | 0.8036 | 0.9884 |
| 0.0369 | 0.33 | 13500 | 0.0331 | 0.7683 | 0.8427 | 0.8038 | 0.9886 |
| 0.0361 | 0.35 | 14000 | 0.0336 | 0.7699 | 0.8322 | 0.7999 | 0.9885 |
| 0.0361 | 0.36 | 14500 | 0.0336 | 0.7735 | 0.8390 | 0.8049 | 0.9885 |
| 0.0372 | 0.37 | 15000 | 0.0333 | 0.7747 | 0.8343 | 0.8034 | 0.9887 |
| 0.0366 | 0.38 | 15500 | 0.0343 | 0.7646 | 0.8468 | 0.8036 | 0.9883 |
| 0.0345 | 0.4 | 16000 | 0.0333 | 0.7790 | 0.8334 | 0.8053 | 0.9887 |
| 0.0363 | 0.41 | 16500 | 0.0329 | 0.7783 | 0.8301 | 0.8034 | 0.9887 |
| 0.0348 | 0.42 | 17000 | 0.0341 | 0.7626 | 0.8533 | 0.8054 | 0.9884 |
| 0.0391 | 0.43 | 17500 | 0.0324 | 0.7873 | 0.8295 | 0.8079 | 0.9889 |
| 0.0344 | 0.45 | 18000 | 0.0334 | 0.7769 | 0.8369 | 0.8058 | 0.9887 |
| 0.0378 | 0.46 | 18500 | 0.0337 | 0.7741 | 0.8394 | 0.8054 | 0.9886 |
| 0.035 | 0.47 | 19000 | 0.0328 | 0.7827 | 0.8323 | 0.8067 | 0.9888 |
| 0.0351 | 0.48 | 19500 | 0.0327 | 0.7815 | 0.8371 | 0.8083 | 0.9889 |
| 0.037 | 0.5 | 20000 | 0.0328 | 0.7793 | 0.8388 | 0.8079 | 0.9888 |
| 0.0346 | 0.51 | 20500 | 0.0325 | 0.7804 | 0.8416 | 0.8099 | 0.9890 |
| 0.0364 | 0.52 | 21000 | 0.0323 | 0.7861 | 0.8339 | 0.8093 | 0.9889 |
| 0.0356 | 0.53 | 21500 | 0.0327 | 0.7729 | 0.8510 | 0.8101 | 0.9889 |
| 0.0346 | 0.54 | 22000 | 0.0325 | 0.7791 | 0.8407 | 0.8087 | 0.9889 |
| 0.0342 | 0.56 | 22500 | 0.0334 | 0.7790 | 0.8443 | 0.8104 | 0.9889 |
| 0.0368 | 0.57 | 23000 | 0.0322 | 0.7869 | 0.8323 | 0.8089 | 0.9890 |
| 0.0371 | 0.58 | 23500 | 0.0320 | 0.7890 | 0.8356 | 0.8116 | 0.9891 |
| 0.0344 | 0.59 | 24000 | 0.0321 | 0.7910 | 0.8321 | 0.8110 | 0.9892 |
| 0.0342 | 0.61 | 24500 | 0.0319 | 0.7881 | 0.8356 | 0.8111 | 0.9892 |
| 0.0339 | 0.62 | 25000 | 0.0320 | 0.7889 | 0.8317 | 0.8097 | 0.9892 |
| 0.0347 | 0.63 | 25500 | 0.0316 | 0.7909 | 0.8347 | 0.8122 | 0.9892 |
| 0.034 | 0.64 | 26000 | 0.0318 | 0.7887 | 0.8324 | 0.8100 | 0.9891 |
| 0.0347 | 0.66 | 26500 | 0.0317 | 0.7791 | 0.8525 | 0.8141 | 0.9891 |
| 0.0345 | 0.67 | 27000 | 0.0318 | 0.7870 | 0.8384 | 0.8119 | 0.9892 |
| 0.0347 | 0.68 | 27500 | 0.0317 | 0.7903 | 0.8426 | 0.8157 | 0.9893 |
| 0.0371 | 0.69 | 28000 | 0.0311 | 0.7965 | 0.8332 | 0.8144 | 0.9894 |
| 0.0338 | 0.71 | 28500 | 0.0316 | 0.7863 | 0.8442 | 0.8142 | 0.9892 |
| 0.0352 | 0.72 | 29000 | 0.0315 | 0.7810 | 0.8537 | 0.8157 | 0.9892 |
| 0.0344 | 0.73 | 29500 | 0.0314 | 0.7953 | 0.8353 | 0.8148 | 0.9894 |
| 0.0322 | 0.74 | 30000 | 0.0320 | 0.7836 | 0.8449 | 0.8131 | 0.9891 |
| 0.0355 | 0.76 | 30500 | 0.0312 | 0.7877 | 0.8480 | 0.8167 | 0.9894 |
| 0.035 | 0.77 | 31000 | 0.0313 | 0.7864 | 0.8504 | 0.8171 | 0.9893 |
| 0.0346 | 0.78 | 31500 | 0.0310 | 0.7931 | 0.8424 | 0.8170 | 0.9895 |
| 0.0339 | 0.79 | 32000 | 0.0316 | 0.7857 | 0.8501 | 0.8166 | 0.9893 |
| 0.033 | 0.8 | 32500 | 0.0311 | 0.7975 | 0.8406 | 0.8185 | 0.9895 |
| 0.0337 | 0.82 | 33000 | 0.0314 | 0.7886 | 0.8457 | 0.8162 | 0.9894 |
| 0.0357 | 0.83 | 33500 | 0.0311 | 0.7923 | 0.8437 | 0.8172 | 0.9894 |
| 0.0348 | 0.84 | 34000 | 0.0312 | 0.7909 | 0.8490 | 0.8189 | 0.9894 |
| 0.0343 | 0.85 | 34500 | 0.0311 | 0.7856 | 0.8528 | 0.8179 | 0.9893 |
| 0.0323 | 0.87 | 35000 | 0.0311 | 0.7884 | 0.8505 | 0.8183 | 0.9894 |
| 0.0329 | 0.88 | 35500 | 0.0307 | 0.7981 | 0.8399 | 0.8185 | 0.9896 |
| 0.0324 | 0.89 | 36000 | 0.0313 | 0.7830 | 0.8576 | 0.8186 | 0.9893 |
| 0.0336 | 0.9 | 36500 | 0.0312 | 0.7836 | 0.8566 | 0.8185 | 0.9893 |
| 0.0327 | 0.92 | 37000 | 0.0309 | 0.7887 | 0.8501 | 0.8182 | 0.9895 |
| 0.0338 | 0.93 | 37500 | 0.0312 | 0.7887 | 0.8514 | 0.8188 | 0.9894 |
| 0.0327 | 0.94 | 38000 | 0.0311 | 0.7873 | 0.8534 | 0.8190 | 0.9894 |
| 0.0326 | 0.95 | 38500 | 0.0308 | 0.7953 | 0.8459 | 0.8198 | 0.9895 |
| 0.0338 | 0.97 | 39000 | 0.0307 | 0.7932 | 0.8488 | 0.8201 | 0.9895 |
| 0.0354 | 0.98 | 39500 | 0.0308 | 0.7916 | 0.8502 | 0.8198 | 0.9895 |
| 0.0313 | 0.99 | 40000 | 0.0309 | 0.7897 | 0.8523 | 0.8198 | 0.9895 |
### Framework versions
- Transformers 4.22.0
- Pytorch 1.10.0
- Tokenizers 0.12.1
|
aalsinat/Reinforce
|
aalsinat
| 2023-01-10T14:55:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T14:54:41Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
codingmoh/cat-identifier
|
codingmoh
| 2023-01-10T14:51:56Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-01-10T14:51:40Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
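A minimal loading sketch (assuming the repo contains a fastai `Learner` pushed with `push_to_hub_fastai`; the image path is hypothetical):
```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("codingmoh/cat-identifier")
pred_class, pred_idx, probs = learner.predict("my_cat_photo.jpg")  # hypothetical local image path
print(pred_class, float(probs[pred_idx]))
```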
|
Arjun12/ppo-LunarLander-v2
|
Arjun12
| 2023-01-10T14:42:01Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T14:41:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.36 +/- 39.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (not the author's code; the checkpoint filename below is a guess — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub and load it (filename is assumed).
checkpoint = load_from_hub(repo_id="Arjun12/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tayfen/Reinforce_px_copter_baseline_2
|
tayfen
| 2023-01-10T14:23:43Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T13:54:34Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_px_copter_baseline_2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 56.90 +/- 31.17
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AV10/distilbert-base-uncased-finetuned-emotion
|
AV10
| 2023-01-10T14:15:29Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-10T13:20:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: train
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1529
- F1 Score: 0.9362
- Accuracy: 0.936
## Model description
More information needed
## Intended uses & limitations
More information needed
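A minimal inference sketch (not from the model author; it assumes the standard `transformers` text-classification pipeline):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AV10/distilbert-base-uncased-finetuned-emotion")
classifier("I can't believe the fine-tuning finally converged, this is amazing!")
```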
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.5258 | 1.0 | 250 | 0.1909 | 0.9255 | 0.9265 |
| 0.145 | 2.0 | 500 | 0.1529 | 0.9362 | 0.936 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Sevenlee/kkk
|
Sevenlee
| 2023-01-10T13:05:50Z | 0 | 0 |
allennlp
|
[
"allennlp",
"chemistry",
"image-segmentation",
"ab",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2023-01-09T08:30:06Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- ab
metrics:
- accuracy 100
- bertscore
library_name: allennlp
pipeline_tag: image-segmentation
tags:
- chemistry
---
|
Pitak/Tak-Hug
|
Pitak
| 2023-01-10T13:00:12Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-01-10T13:00:12Z |
---
license: bigscience-openrail-m
---
|
ayor-dns/RL_course
|
ayor-dns
| 2023-01-10T12:50:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T11:37:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.42 +/- 22.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (not the author's code; the checkpoint filename below is a guess — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub and load it (filename is assumed).
checkpoint = load_from_hub(repo_id="ayor-dns/RL_course", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
misza222/Reinforce-CartPole
|
misza222
| 2023-01-10T12:30:45Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-09T11:14:46Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tayfen/Reinforce_cartpole_baseline
|
tayfen
| 2023-01-10T12:01:23Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T12:01:08Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_cartpole_baseline
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aj555/ppo-LunarLander-v2-first-run
|
aj555
| 2023-01-10T11:57:52Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T11:57:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 306.58 +/- 10.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (not the author's code; the checkpoint filename below is a guess — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub and load it (filename is assumed).
checkpoint = load_from_hub(repo_id="aj555/ppo-LunarLander-v2-first-run", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sd-concepts-library/wakefit-coffee-table
|
sd-concepts-library
| 2023-01-10T11:51:11Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-01-10T11:51:07Z |
---
license: mit
---
### wakefit-coffee-table on Stable Diffusion
This is the `<wakefit-coffee-table>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:







|
FDB-BG/ppo-lunar-lander-v2
|
FDB-BG
| 2023-01-10T11:47:57Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T09:31:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.27 +/- 20.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (not the author's code; the checkpoint filename below is a guess — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub and load it (filename is assumed).
checkpoint = load_from_hub(repo_id="FDB-BG/ppo-lunar-lander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse-take-3
|
rohitp1
| 2023-01-10T11:43:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-01-10T07:25:02Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse-take-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse-take-3
This model is a fine-tuned version of [rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse](https://huggingface.co/rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 28.9263
- Wer: 0.3301
## Model description
More information needed
## Intended uses & limitations
More information needed
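A minimal inference sketch (not from the model author; it assumes the standard `transformers` ASR pipeline, that the repo ships the matching processor, and a hypothetical local 16 kHz audio file):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse-take-3",
)
asr("sample_librispeech_clip.wav")  # hypothetical 16 kHz mono audio file
```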
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 291.1088 | 0.22 | 400 | 28.4207 | 0.3362 |
| 284.1968 | 0.45 | 800 | 28.1458 | 0.3314 |
| 288.1414 | 0.67 | 1200 | 28.1397 | 0.3326 |
| 290.0272 | 0.9 | 1600 | 28.4186 | 0.3323 |
| 287.3224 | 1.12 | 2000 | 28.3548 | 0.3283 |
| 279.1482 | 1.35 | 2400 | 28.5373 | 0.3309 |
| 285.8217 | 1.57 | 2800 | 28.4447 | 0.3301 |
| 282.9265 | 1.79 | 3200 | 28.5379 | 0.3365 |
| 292.6254 | 2.02 | 3600 | 28.2632 | 0.3299 |
| 279.215 | 2.24 | 4000 | 28.9263 | 0.3301 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.11.0
|
ismet/flan-t5-base-finetuned-pwkp
|
ismet
| 2023-01-10T11:18:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"simplification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-09T12:49:47Z |
---
license: apache-2.0
tags:
- simplification
- generated_from_trainer
metrics:
- sacrebleu
model-index:
- name: flan-t5-base-finetuned-pwkp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-finetuned-pwkp
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9315
- Sacrebleu: 41.2105
## Model description
More information needed
## Intended uses & limitations
More information needed
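A minimal inference sketch (not from the model author; it assumes the standard `transformers` text2text pipeline — check whether the model expects a task prefix such as `simplify:`):
```python
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="ismet/flan-t5-base-finetuned-pwkp")
simplifier(
    "The committee deliberated extensively before promulgating the revised regulations.",
    max_new_tokens=64,
)
```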
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|
| 1.0683 | 1.0 | 3421 | 0.9984 | 40.9399 |
| 0.9748 | 2.0 | 6842 | 0.9584 | 41.0858 |
| 0.9279 | 3.0 | 10263 | 0.9433 | 41.1863 |
| 0.9025 | 4.0 | 13684 | 0.9315 | 41.2105 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
tangoqash/SAM
|
tangoqash
| 2023-01-10T11:08:04Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-10T10:53:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: SAM
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value:
accuracy: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAM
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3061
- Accuracy: 0.8733
- F1: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
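A minimal inference sketch (not from the model author; it assumes the standard `transformers` text-classification pipeline and an invented movie review):
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="tangoqash/SAM")
sentiment("A slow start, but the final act completely won me over.")
```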
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
tomercagan/ppo-LunarLander-v2
|
tomercagan
| 2023-01-10T10:29:45Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T09:03:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.35 +/- 17.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (not the author's code; the checkpoint filename below is a guess — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub and load it (filename is assumed).
checkpoint = load_from_hub(repo_id="tomercagan/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RisiPisi/lunarlander
|
RisiPisi
| 2023-01-10T10:15:37Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T10:15:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.11 +/- 13.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
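A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; check the repo files for the actual archive name
model = PPO.load(load_from_hub("RisiPisi/lunarlander", "ppo-LunarLander-v2.zip"))
```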
|
RegisGraptin/dqn-SpaceInvadersNoFrameskip-v4
|
RegisGraptin
| 2023-01-10T10:11:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-31T12:42:10Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 692.00 +/- 164.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RegisGraptin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RegisGraptin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RegisGraptin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1900000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
bitextor/bicleaner-ai-full-en-sl
|
bitextor
| 2023-01-10T10:10:22Z | 39 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"sl",
"multilingual",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2022-12-20T16:43:06Z |
---
language:
- en
- sl
- multilingual
license: gpl-3.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-sl
Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near to 1) or not (with a value near to 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
|
bitextor/bicleaner-ai-full-en-sq
|
bitextor
| 2023-01-10T10:10:15Z | 35 | 2 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"sq",
"multilingual",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2022-12-20T16:47:22Z |
---
language:
- en
- sq
- multilingual
license: gpl-3.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-sq
Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near to 1) or not (with a value near to 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
|
bitextor/bicleaner-ai-full-en-fr
|
bitextor
| 2023-01-10T10:10:06Z | 32 | 1 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"fr",
"multilingual",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2022-12-20T16:53:16Z |
---
language:
- en
- fr
- multilingual
license: gpl-3.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-fr
Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near to 1) or not (with a value near to 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
|
AhiyaB/mt5-small-finetuned-Big-Patent-h
|
AhiyaB
| 2023-01-10T09:57:32Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:big_patent",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-12-01T13:16:45Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- big_patent
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-Big-Patent-h
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: big_patent
type: big_patent
config: h
split: train
args: h
metrics:
- name: Rouge1
type: rouge
value: 33.9091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-Big-Patent-h
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2622
- Rouge1: 33.9091
- Rouge2: 14.1731
- Rougel: 30.105
- Rougelsum: 30.3666
## Model description
In this project, we fine-tuned mT5-small, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages.
The model was fine-tuned on the electric patent corpus using a variety of techniques, including transfer learning, data augmentation, and hyperparameter tuning.
## Intended uses & limitations
The fine-tuned model showed significant improvements in performance on the electric patent-specific tasks compared to the original pre-trained model.
Note: This model is suitable for researchers working on electric patents, as it was fine-tuned on electric patents and can be used for related NLP problems in electric-patent research.
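A minimal summarization sketch (the model id is taken from this card; the input text and generation settings are illustrative assumptions):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and summarize a patent-style passage
summarizer = pipeline("summarization", model="AhiyaB/mt5-small-finetuned-Big-Patent-h")
patent_text = "An electric machine comprising a stator, a rotor, and a cooling circuit ..."  # illustrative
print(summarizer(patent_text, max_length=128, min_length=30)[0]["summary_text"])
```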
## Training and evaluation data
A subset of electric patents was used to fine-tune the model.
The fine-tuned model was evaluated using the ROUGE metric on a variety of natural language processing tasks specific to the patent domain, including named entity recognition and summarization.
## Training procedure
The model was fine-tuned on the electric patent corpus using a variety of techniques, including transfer learning, data augmentation, and hyperparameter tuning.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.5817 | 1.0 | 1071 | 2.3830 | 32.8521 | 13.2087 | 29.5594 | 29.7744 |
| 2.5657 | 2.0 | 2142 | 2.3345 | 33.9434 | 14.0573 | 30.0135 | 30.2533 |
| 2.4915 | 3.0 | 3213 | 2.2761 | 33.2033 | 13.2053 | 29.5126 | 29.8023 |
| 2.4365 | 4.0 | 4284 | 2.3041 | 33.8649 | 13.6629 | 30.0377 | 30.257 |
| 2.3952 | 5.0 | 5355 | 2.2722 | 33.9208 | 13.8018 | 30.1035 | 30.3432 |
| 2.3628 | 6.0 | 6426 | 2.2850 | 33.883 | 13.9537 | 30.0579 | 30.2417 |
| 2.3474 | 7.0 | 7497 | 2.2858 | 33.7201 | 14.0808 | 30.0762 | 30.255 |
| 2.331 | 8.0 | 8568 | 2.2622 | 33.9091 | 14.1731 | 30.105 | 30.3666 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Scrwed/Reinforce-cartpole
|
Scrwed
| 2023-01-10T09:46:29Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T09:46:14Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cardiffnlp/xlm-twitter-politics-sentiment
|
cardiffnlp
| 2023-01-10T09:42:48Z | 242 | 10 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"generated_from_keras_callback",
"arxiv:2104.12250",
"arxiv:2202.00396",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-02T00:34:22Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: XLM-T-Sent-Politics
results: []
---
# XLM-T-Sent-Politics
This is an "extension" of the multilingual `twitter-xlm-roberta-base-sentiment` model ([model](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment), [original paper](https://arxiv.org/abs/2104.12250)) with a focus on sentiment from politicians' tweets. The original sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but further training was done using tweets from Members of Parliament from UK (English), Spain (Spanish) and Greece (Greek).
- Reference Paper: [Politics, Sentiment and Virality: A Large-Scale Multilingual Twitter Analysis in Greece, Spain and United Kingdom](https://arxiv.org/pdf/2202.00396.pdf).
- Git Repo: [https://github.com/cardiffnlp/politics-and-virality-twitter](https://github.com/cardiffnlp/politics-and-virality-twitter).
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax

def preprocess(text):
    # Replace user mentions and links with generic placeholders
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

MODEL = "cardiffnlp/xlm-twitter-politics-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print the scores, sorted from lowest to highest
ranking = np.argsort(scores)
for i in range(scores.shape[0]):
s = scores[ranking[i]]
print(i, s)
```
Output:
```
0 0.0048229103
1 0.03117284
2 0.9640044
```
|
NYTK/summarization-hi-bart-base-1024-hungarian
|
NYTK
| 2023-01-10T09:22:52Z | 139 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"hu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
language:
- hu
tags:
- summarization
license: apache-2.0
metrics:
- rouge
widget:
- text: >-
A Tisza-parti város állatkertjében régóta tartanak szurikátákat ( Suricata
suricatta ) , de tavaly tavaszig nem sikerült szaporítani őket , annak
ellenére , hogy tágas ház és kifutó épült számukra - közölte Veprik Róbert
igazgató . 2010-ben alakult ki az új - három Amszterdamból származó
nőstényből és egy budapesti fiatal hímből álló - csapat , amely szaporodni
kezdett . 2011-ben három , idén pedig egy utóddal örvendeztették meg a
gondozókat és az állatbarátokat . A szurikáták utódai - tizenegy hetes
vemhesség után - október és március között vakon és szőrtelenül jönnek a
világra . A kicsinyek háromhetesen bújnak elő az üregből , és nevelésükben
mindkét szülő részt vesz . A szurikátacsapatokban a család tagjai nagyon
szoros kapcsolatban állnak egymással , viszont nagyon harciasan fellépnek az
idegenekkel szemben , akár meg is ölhetik azt az állatot , amelyet
betolakodónak tekintenek . Bár a Dél-Afrikában , a Kalahári sivatagban
őshonos cibetmacskaféle ragadozókat a szegedi állatkertben természetes
élőhelyükhöz képest kevesebb veszély fenyegeti , a vadasparki erdőben
ragadozó madarak is élnek , amelyek akár zsákmányként is tekinthetnének a
szurikátákra . A szegedi csapatnál azonban szigorú őrség van , mindig lesi
valaki két lábra állva a veszélyforrásokat . Az őrszemek figyelmét még a
sárkányrepülők is felkeltik , és felbukkanásakor valamennyi egyed biztos
helyre menekül . A szurikáták a Kalahári sivatag bozótos , sziklás
területein csapatokban élnek . A 700 gramm körüli testtömegű ragadozók
rovarokkal , lárvákkal , skorpiókkal táplálkoznak , de néha elfogyasztják a
kisebb gerinceseket , tojásokat és növényi gumókat is . A nappal aktív
állatok földalatti üregrendszert ásnak , amelynek több bejárata is van . Ha
a szurikáták idegen csapattal vagy ragadozóval kerülnek szembe , azonnal
elkezdenek ásni , nagy porfelhőt kavarva . Az is gyakorta előfordul , hogy
szorosan egymáshoz bújnak , felborzolják szőrüket , megnyújtják testüket ,
hogy minél nagyobbnak látszódjanak . Az előadásuk csúcspontján pedig az
egész csapat a levegőbe ugrik , közben pedig morog . A hangadás egyébként is
fontos a szurikáták kapcsolatában , az egyedek legalább tízféle jelzést
használnak a kolónián belül .
---
# Hungarian Abstractive Summarization BART model
For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- BART base model (see Results Table - bold):
- Pretrained on Webcorpus 2.0
- Fine-tuned on the HI corpus (hvg.hu + index.hu)
- Segments: 559,162
## Limitations
- tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy))
- **max_source_length = 1024**
- max_target_length = 256
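A minimal usage sketch (the model id is taken from this card); per the limitation above the input should be tokenized with HuSpaCy, while plain text is used here only for illustration:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "NYTK/summarization-hi-bart-base-1024-hungarian"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "A Tisza-parti város állatkertjében régóta tartanak szurikátákat ..."  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```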
## Results
| Model | HI | NOL |
| ------------- | ------------- | ------------- |
| BART-base-512 | 30.18/13.86/22.92 | 46.48/32.40/39.45 |
| BART-base-1024| **31.86/14.59/23.79** | 47.01/32.91/39.97 |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Yang, Zijian Győző},
pages = {15--29}
}
```
|
NYTK/summarization-nol-bart-hungarian
|
NYTK
| 2023-01-10T09:22:27Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"hu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
language:
- hu
tags:
- summarization
license: apache-2.0
metrics:
- rouge
widget:
- text: >-
A Tisza-parti város állatkertjében régóta tartanak szurikátákat ( Suricata
suricatta ) , de tavaly tavaszig nem sikerült szaporítani őket , annak
ellenére , hogy tágas ház és kifutó épült számukra - közölte Veprik Róbert
igazgató . 2010-ben alakult ki az új - három Amszterdamból származó
nőstényből és egy budapesti fiatal hímből álló - csapat , amely szaporodni
kezdett . 2011-ben három , idén pedig egy utóddal örvendeztették meg a
gondozókat és az állatbarátokat . A szurikáták utódai - tizenegy hetes
vemhesség után - október és március között vakon és szőrtelenül jönnek a
világra . A kicsinyek háromhetesen bújnak elő az üregből , és nevelésükben
mindkét szülő részt vesz . A szurikátacsapatokban a család tagjai nagyon
szoros kapcsolatban állnak egymással , viszont nagyon harciasan fellépnek az
idegenekkel szemben , akár meg is ölhetik azt az állatot , amelyet
betolakodónak tekintenek . Bár a Dél-Afrikában , a Kalahári sivatagban
őshonos cibetmacskaféle ragadozókat a szegedi állatkertben természetes
élőhelyükhöz képest kevesebb veszély fenyegeti , a vadasparki erdőben
ragadozó madarak is élnek , amelyek akár zsákmányként is tekinthetnének a
szurikátákra . A szegedi csapatnál azonban szigorú őrség van , mindig lesi
valaki két lábra állva a veszélyforrásokat . Az őrszemek figyelmét még a
sárkányrepülők is felkeltik , és felbukkanásakor valamennyi egyed biztos
helyre menekül . A szurikáták a Kalahári sivatag bozótos , sziklás
területein csapatokban élnek . A 700 gramm körüli testtömegű ragadozók
rovarokkal , lárvákkal , skorpiókkal táplálkoznak , de néha elfogyasztják a
kisebb gerinceseket , tojásokat és növényi gumókat is . A nappal aktív
állatok földalatti üregrendszert ásnak , amelynek több bejárata is van . Ha
a szurikáták idegen csapattal vagy ragadozóval kerülnek szembe , azonnal
elkezdenek ásni , nagy porfelhőt kavarva . Az is gyakorta előfordul , hogy
szorosan egymáshoz bújnak , felborzolják szőrüket , megnyújtják testüket ,
hogy minél nagyobbnak látszódjanak . Az előadásuk csúcspontján pedig az
egész csapat a levegőbe ugrik , közben pedig morog . A hangadás egyébként is
fontos a szurikáták kapcsolatában , az egyedek legalább tízféle jelzést
használnak a kolónián belül .
---
# Hungarian Abstractive Summarization BART model
For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- BART base model (see Results Table - bold):
- Pretrained on Webcorpus 2.0
- Fine-tuned on the NOL corpus (nol.hu)
- Segments: 397,343
## Limitations
- tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy))
- max_source_length = 512
- max_target_length = 256
## Results
| Model | HI | NOL |
| ------------- | ------------- | ------------- |
| BART-base-512 | 30.18/13.86/22.92 | **46.48/32.40/39.45** |
| BART-base-1024| 31.86/14.59/23.79 | 47.01/32.91/39.97 |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Yang, Zijian Győző},
pages = {15--29}
}
```
|
cwinkler/distilbert-base-uncased-finetuned-greenplastics
|
cwinkler
| 2023-01-10T09:21:10Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-09T07:22:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-greenplastics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-greenplastics
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0329
- Accuracy: 0.9922
- F1: 0.9922
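A minimal inference sketch (the model id is taken from this card; the example abstract is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and classify a patent-style abstract
clf = pipeline("text-classification",
               model="cwinkler/distilbert-base-uncased-finetuned-greenplastics")
print(clf("A biodegradable polymer composition derived from starch for packaging applications."))
```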
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Dataset: `cwinkler/patents_green_plastics_10k`, split with `.train_test_split(test_size=0.3)`.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2334 | 1.0 | 113 | 0.0384 | 0.9896 | 0.9896 |
| 0.0245 | 2.0 | 226 | 0.0329 | 0.9922 | 0.9922 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
cleanrl/SpaceInvaders-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1
|
cleanrl
| 2023-01-10T08:57:01Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"SpaceInvaders-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-10T08:56:57Z |
---
tags:
- SpaceInvaders-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvaders-v5
type: SpaceInvaders-v5
metrics:
- type: mean_reward
value: 31672.50 +/- 17575.25
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **SpaceInvaders-v5**
This is a trained model of a PPO agent playing SpaceInvaders-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]"
python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id SpaceInvaders-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id SpaceInvaders-v5 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'async_batch_size': 16,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'SpaceInvaders-v5',
'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado',
'gae': True,
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1024,
'norm_adv': True,
'num_envs': 64,
'num_minibatches': 2,
'num_steps': 32,
'num_updates': 24414,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 2,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'envpool-atari'}
```
|
akum1343/results2
|
akum1343
| 2023-01-10T08:49:24Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-10T07:17:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 27 | 6.1310 | 11.5882 | 3.2614 | 10.0378 | 11.2317 | 17.2 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cpu
- Datasets 2.6.1
- Tokenizers 0.12.1
|
AhmedBou/TuniBert
|
AhmedBou
| 2023-01-10T08:12:26Z | 117 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"sentiment analysis",
"classification",
"arabic dialect",
"tunisian dialect",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
language:
- ar
tags:
- sentiment analysis
- classification
- arabic dialect
- tunisian dialect
---
This is a fine-tuned BERT model on Tunisian dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks.
LABEL_1: Positive
LABEL_2: Negative
LABEL_0: Neutral
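A minimal usage sketch (the model id and label meanings are taken from this card; the example sentence is illustrative):
```python
from transformers import pipeline

# Map the raw labels from this card to readable sentiment names
label_names = {"LABEL_0": "Neutral", "LABEL_1": "Positive", "LABEL_2": "Negative"}
clf = pipeline("text-classification", model="AhmedBou/TuniBert")
pred = clf("الخدمة باهية برشا")[0]  # illustrative Tunisian-dialect sentence
print(label_names[pred["label"]], round(pred["score"], 3))
```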
This work is an integral component of my Master's degree thesis and represents the culmination of extensive research and labor.
If you wish to use the Tunisian-Dialect-Corpus or the TuniBert model, kindly refer to the directories provided: [huggingface.co/AhmedBou] [github.com/BoulahiaAhmed]
|