Dataset columns (Hugging Face dataset-viewer schema):

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-01 06:29:04 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 530 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-01 06:28:51 |
| card | string | length 11 to 1.01M |
qgallouedec/ppo_lstm-Ant-v3-1368740319
|
qgallouedec
| 2023-02-28T12:34:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T12:34:30Z |
---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 1084.75 +/- 203.82
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **Ant-v3**
This is a trained model of a **RecurrentPPO** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo_lstm --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpLstmPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
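For a Python-level alternative to the CLI above, a minimal loading sketch with `sb3-contrib` and `huggingface_sb3` (the checkpoint filename is an assumption; check the repo's files, and note that `normalize: True` means the matching VecNormalize statistics should also be loaded for faithful evaluation):
```python
from huggingface_sb3 import load_from_hub
from sb3_contrib import RecurrentPPO

# Filename assumed from the usual RL Zoo naming; verify against the repo.
checkpoint = load_from_hub(
    repo_id="qgallouedec/ppo_lstm-Ant-v3-1368740319",
    filename="ppo_lstm-Ant-v3.zip",
)
model = RecurrentPPO.load(checkpoint)
```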
|
mm-ai/vit-cc-512-birads
|
mm-ai
| 2023-02-28T12:26:40Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:preprocessed1024_config",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-02-28T09:29:03Z |
---
tags:
- generated_from_trainer
datasets:
- preprocessed1024_config
metrics:
- accuracy
- f1
model-index:
- name: vit-cc-512-birads
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: preprocessed1024_config
type: preprocessed1024_config
args: default
metrics:
- name: Accuracy
type: accuracy
value:
accuracy: 0.4943467336683417
- name: F1
type: f1
value:
f1: 0.3929699341372617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-cc-512-birads
This model is a fine-tuned version of [](https://huggingface.co/) on the preprocessed1024_config dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1133
- Accuracy: 0.4943467336683417
- F1: 0.3929699341372617
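A minimal inference sketch (assuming the standard `transformers` image-classification pipeline works with this checkpoint; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="mm-ai/vit-cc-512-birads")
print(classifier("example_mammogram.png"))  # placeholder image path
```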
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--:|
| 1.1037 | 1.0 | 796 | 1.0357 | 0.4748743718592965 | 0.21465076660988078 |
| 1.0588 | 2.0 | 1592 | 1.0446 | 0.4623115577889447 | 0.33094476503399495 |
| 1.0486 | 3.0 | 2388 | 1.0408 | 0.47361809045226133 | 0.3313643442345453 |
| 1.0288 | 4.0 | 3184 | 1.0186 | 0.5050251256281407 | 0.3404676010455165 |
| 1.0284 | 5.0 | 3980 | 1.0288 | 0.5037688442211056 | 0.3406391773730375 |
| 0.997 | 6.0 | 4776 | 1.0183 | 0.5087939698492462 | 0.3539488153998284 |
| 0.9682 | 7.0 | 5572 | 1.0965 | 0.4566582914572864 | 0.3695106771946128 |
| 0.9313 | 8.0 | 6368 | 1.0554 | 0.4962311557788945 | 0.38158088397057704 |
| 0.8938 | 9.0 | 7164 | 1.0930 | 0.4943467336683417 | 0.38196414933207573 |
| 0.8697 | 10.0 | 7960 | 1.1133 | 0.4943467336683417 | 0.3929699341372617 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sarthakc44/Reinforce-Pixelcopter-PLE-v0
|
sarthakc44
| 2023-02-28T12:00:21Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T12:00:15Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.35 +/- 44.21
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Alex48/a2c-AntBulletEnv-v0
|
Alex48
| 2023-02-28T12:00:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T00:29:36Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2114.81 +/- 106.21
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual push_to_hub naming; verify against the repo.
checkpoint = load_from_hub(repo_id="Alex48/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
tkoterwas/Reinforce-pixelcopter
|
tkoterwas
| 2023-02-28T11:47:11Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T11:45:37Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 14.90 +/- 13.45
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
johnowhitaker/pyramid_noise_test_500steps
|
johnowhitaker
| 2023-02-28T11:43:43Z | 31 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"multires_noise",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-28T10:57:25Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- multires_noise
inference: true
---
A model trained with Pyramid Noise - see https://wandb.ai/johnowhitaker/multires_noise/reports/Multi-Resolution-Noise-for-Diffusion-Model-Training--VmlldzozNjYyOTU2 for details
```python
import torch
import random
from torch import nn
def pyramid_noise_like(x, discount=0.9):
b, c, w, h = x.shape
u = nn.Upsample(size=(w, h), mode='bilinear')
noise = torch.randn_like(x)
for i in range(6):
r = random.random()*2+2 # Rather than always going 2x, pick a random scale factor in [2, 4)
w, h = max(1, int(w/(r**i))), max(1, int(h/(r**i)))
noise += u(torch.randn(b, c, w, h).to(x)) * discount**i
if w==1 or h==1: break
return noise / noise.std() # Scale back to unit variance
```
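For context, a hedged sketch of where this noise would plug into a diffusion training step (illustrative only; the scheduler call is commented out because it is not part of this snippet):
```python
import torch

latents = torch.randn(4, 4, 64, 64)   # stand-in for VAE-encoded image latents
noise = pyramid_noise_like(latents)   # replaces the usual torch.randn_like(latents)
# noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
```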
To use the model for inference, just load it like a normal Stable Diffusion pipeline:
```python
from diffusers import StableDiffusionPipeline
import torch

model_path = "pyramid_noise_test_500steps"
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(prompt="A black image").images[0]
image
```
|
AlexSh/Taxi-v3
|
AlexSh
| 2023-02-28T11:39:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T11:39:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub here is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="AlexSh/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
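Continuing from the snippet above, a short greedy-rollout sketch (assumes the classic Gym API, where `reset()` returns just the observation and `step()` returns a 4-tuple, and a `qtable` key in the pickled dict, as used by other course checkpoints):
```python
import numpy as np

state = env.reset()
total_reward, done = 0, False
while not done:
    action = np.argmax(model["qtable"][state])    # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward}")
```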
|
ChechkovEugene/q-Taxi-v3
|
ChechkovEugene
| 2023-02-28T11:37:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T11:37:16Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub here is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="ChechkovEugene/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
snipaid/gptj-title-teaser-10k
|
snipaid
| 2023-02-28T11:34:54Z | 0 | 1 | null |
[
"gptj",
"title generation",
"headline-generation",
"teaser generation",
"news",
"de",
"arxiv:2101.00027",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2023-02-08T18:13:28Z |
---
license: mit
language:
- de
tags:
- gptj
- title generation
- headline-generation
- teaser generation
- news
inference: false
---
# GPT-J-Title-Teaser-10k
<!-- Provide a quick summary of what the model is/does. -->
gptj-title-teaser-10k
Version 1.0 / 22 December 2022
A fine-tuned version of the [GPT-J-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit) model for generating titles and teasers for news.
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
Test generation capabilities here: https://snipaid.tech
A GPT-J model finetuned on German-language news using a causal language modeling (CLM) objective.
GPT-J is a transformer model pretrained on a very large corpus of English data, [The Pile](https://huggingface.co/datasets/the_pile), in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data),
with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
Inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (a word or piece of a word) to the right.
Internally, the model uses a masking mechanism to make sure the prediction for token i only uses the inputs from 1 to i and not the future tokens.
The pretrained model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks.
The model is nonetheless best at what it was pretrained for, which is generating text from a prompt.
A prompt is a piece of text inserted in the input examples so that the original task can be formulated as a (masked) language modeling problem.
To fit the model to the domain of German news for the downstream task of title and teaser generation, it was finetuned on a dataset of 10,000 German news articles in a multi-task finetuning fashion.
Hence the finetuned model's name derives from the model it was finetuned from (gptj), the downstream generation tasks (title, teaser), and the size of the finetuning dataset (10k).
- **Developed by:** snipaid
- **Model type:** gptj
- **Language(s) (NLP):** de
- **License:** MIT
- **Finetuned from model:** [GPT-J-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended for generating titles and teasers of news documents.
News document: A news story's fulltext in plain text.
Title: A few words that reflect the essence of the news story, also known as a headline.
Teaser: A few sentences that spark curiosity about the "best of the rest" of the news story.
## Direct Use and how to get started with the model
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model is built on [GPT-J-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit), which makes it usable and fine-tunable on a single GPU with ~11 GB of memory.
Running it requires some utility code for the 8-bit quantization and LoRA adapters.
Here's how to get started: [Colab notebook](https://colab.research.google.com/drive/1-FdkAL5RYaNRkaY3cFRc_TY5yv3Scxdo)
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Misuse:
* Generating and spreading misinformation
* Generating content that is discriminating, violent or otherwise harmful
Use cases the model will not work well for:
* Generating snippets other than title and teaser
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The base model GPT-J was trained on the Pile, a dataset scraped from many different websites.
This dataset is known to contain profanity, lewd, and otherwise abrasive language, alongside certain biases.
Fine-tuning does not eliminate those risks and biases. Depending upon the input, gptj-title-teaser-10k may produce socially unacceptable output.
To learn more about biases in the Pile see [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027).
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
When generating text with the model, please keep in mind that the statistically most likely next token or word often does not produce the most "accurate" text.
Never depend upon these models to produce factually accurate output!
We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to ensure the quality of the generated output.
For further information see [limitations and biases of GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B#limitations-and-biases).
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was finetuned on a collection of 10,000 German-language news items scraped from various online news outlets*.
\* *Namely: Speedweek, n-tv, Welt, Tagesspiegel, Faz, Merkur, Bild, Focus, Rp-Online, Freie Presse, Weser-Kurier, Tz, Stern, Kicker, Taz, Schwäbische Zeitung, Frankfurter Rundschau, Stuttgarter Zeitung, Abendzeitung, Donaukurier, Hessische Niedersächsische Allgemeine, Kreiszeitung, Heise Online, Augsburger Allgemeine, SPOX, Nordbayern, Offenbach Post Online, inFranken, Westfälischer Anzeiger, Tagesschau, Nordkurier, Wallstreet online, Computer Bild, Die Rheinlandpfalz, Morgenweb, Bunte, Sport1, LR-Online, Gala, Wirtschaftswoche, Chip, Brigitte, NWZ Online.*
For each news item the dataset contains title, teaser and fulltext.
```
[
  {
    "title": ...,
    "teaser": ...,
    "fulltext": ...
  },
]
```
The dataset contains news items within the categories of sports, politics, panorama, culture, technology, health, knowledge, cars, travel, economy and other in equal proportions.
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was finetuned using a causal language modeling (CLM) objective for multitask finetuning.
### Preprocessing
For each news item, two training inputs were constructed by concatenating the fulltext with the title or teaser, as below.
```
f"[Text]: {item.fulltext} \n [Title]: {item.title}"
f"[Text]: {item.fulltext} \n [Teaser]: {item.teaser}"
```
This results in one input per task for each news item.
*Note: The inserted prompt "[Text]:" marks the beginning of the news item's fulltext.
In the same manner "[Title]:" prompts the news item's title and "[Teaser]:" the news item's teaser.*
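As a concrete illustration, here is a minimal sketch of how these training strings could be built (assuming the JSON structure shown under Training Data; the file name is a placeholder):
```python
import json

# Placeholder file name; the dataset itself is not published with this card.
with open("news_items.json") as f:
    items = json.load(f)

# One input per task (title, teaser) for each news item.
training_inputs = []
for item in items:
    training_inputs.append(f"[Text]: {item['fulltext']} \n [Title]: {item['title']}")
    training_inputs.append(f"[Text]: {item['fulltext']} \n [Teaser]: {item['teaser']}")
```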
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** A100 SXM4
- **Hours used:** 27h 42min
- **Cloud Provider:** Vast.ai
- **Compute Region:** Unknown
- **Carbon Emitted:** ~4.79 kg CO2e
# Glossary
**News Document**, plain text form of a news article or news item.
**News Item**, aka news article. A particular piece of news, usually from a journalistic source.
**Snippet**, a small section of text that is related to a news document.
**Title**, aka headline. A few words that reflect the essence of the news story.
**Teaser**, aka lede. A few sentences that spark curiosity about the "best of the rest" of the news story.
|
Art-phys/Reinforce-Pixelcopter-PLE-v0
|
Art-phys
| 2023-02-28T11:15:11Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T07:46:42Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 61.00 +/- 40.97
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
akanametov/ppo-LunarLander-v2
|
akanametov
| 2023-02-28T11:12:46Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T11:12:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 288.78 +/- 16.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual push_to_hub naming; verify against the repo.
checkpoint = load_from_hub(repo_id="akanametov/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
brouthen/GDG_QAADA_Discover_Reinforcement_Learning_II
|
brouthen
| 2023-02-28T11:12:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T11:12:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: GDG_QAADA_Discover_Reinforcement_Learning_II
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub here is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="brouthen/GDG_QAADA_Discover_Reinforcement_Learning_II", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
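For reference, a hedged sketch of what such an `evaluate_agent` helper does (modeled on the Deep RL Course notebook, whose exact implementation may differ; this assumes a classic Gym API where `reset()` returns only the observation and `step()` returns a 4-tuple):
```python
import numpy as np

def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed):
    # Run n_eval_episodes greedy episodes and report mean/std reward.
    episode_rewards = []
    for episode in range(n_eval_episodes):
        state = env.reset(seed=eval_seed[episode]) if eval_seed else env.reset()
        total_reward = 0
        for _ in range(max_steps):
            action = np.argmax(qtable[state])             # greedy action
            state, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```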
|
eldraco/rl_course_vizdoom_health_gathering_supreme
|
eldraco
| 2023-02-28T11:11:09Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T11:11:05Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.35 +/- 5.06
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r eldraco/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
johnowhitaker/pyramid_noise_test_5000steps
|
johnowhitaker
| 2023-02-28T11:10:03Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-28T08:50:24Z |
I forgot to re-scale my pyramid noise for this one, so the variance of the noise seen during training was closer to 3 than to 1. This means sampled images tend to come out super soft and blurry. That said, to run it:
```python
from diffusers import StableDiffusionPipeline
import torch, random
from torch import nn
def pyramid_noise_like(x, discount=0.9):
b, c, w, h = x.shape
u = nn.Upsample(size=(w, h), mode='bilinear')
noise = torch.randn_like(x)
for i in range(10):
r = random.random()*2+2 # Rather than always going 2x, pick a random scale factor in [2, 4)
w, h = max(1, int(w/(r**i))), max(1, int(h/(r**i)))
noise += u(torch.randn(b, c, w, h).to(x)) * discount**i
if w==1 or h==1: break
return noise # Note no scaling
# model_path was not defined in the original snippet; the Hub repo id is a reasonable stand-in.
model_path = "johnowhitaker/pyramid_noise_test_5000steps"
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.to("cuda")
latents = torch.randn(1, 4, 64, 64).cuda().half()
latents = pyramid_noise_like(latents)
image = pipe(prompt="A candle in a dark room", latents=latents).images[0]
image
```
|
taoist/ppo-LunarLander-v2
|
taoist
| 2023-02-28T11:05:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T11:00:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.34 +/- 16.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual push_to_hub naming; verify against the repo.
checkpoint = load_from_hub(repo_id="taoist/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
abarekatain/Reinforce-CartPole
|
abarekatain
| 2023-02-28T10:58:18Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T10:58:05Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
swl-models/MoonTea-v2
|
swl-models
| 2023-02-28T10:48:26Z | 0 | 3 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-28T10:38:31Z |
---
license: creativeml-openrail-m
---
|
swl-models/Butter
|
swl-models
| 2023-02-28T10:30:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-28T10:17:29Z |
---
license: creativeml-openrail-m
---
|
swl-models/diamond
|
swl-models
| 2023-02-28T10:27:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-28T10:22:54Z |
---
license: creativeml-openrail-m
---
|
agcagc/ppo-LunarLander-v2
|
agcagc
| 2023-02-28T10:26:37Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T10:23:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.88 +/- 16.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual push_to_hub naming; verify against the repo.
checkpoint = load_from_hub(repo_id="agcagc/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
muhammadravi251001/fine-tuned-IndoNLI-Basic-with-indobert-large-p2
|
muhammadravi251001
| 2023-02-28T10:24:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-28T07:50:59Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-Basic-with-indobert-large-p2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Basic-with-indobert-large-p2
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6559
- Accuracy: 0.7392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8412 | 2.49 | 50 | 0.6559 | 0.7392 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Lakshya2k/Taxi-v3-500x6
|
Lakshya2k
| 2023-02-28T10:17:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T09:54:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-500x6
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub here is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="Lakshya2k/Taxi-v3-500x6", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sarthakc44/CartPole-v1
|
sarthakc44
| 2023-02-28T10:07:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T10:06:52Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
haidlir/HF-LunarLander-v2-PPO
|
haidlir
| 2023-02-28T10:03:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T09:56:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.65 +/- 17.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is a guess; check the repo's files for the actual .zip name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; verify against the repo.
checkpoint = load_from_hub(repo_id="haidlir/HF-LunarLander-v2-PPO", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mafwalter/roberta-base-finetuned-question-v-statement
|
mafwalter
| 2023-02-28T09:55:25Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-28T08:23:40Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-question-v-statement
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-question-v-statement
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0064
- Accuracy: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0091 | 1.0 | 10576 | 0.0083 | 0.9988 |
| 0.0035 | 2.0 | 21152 | 0.0064 | 0.9993 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
swl-models/WhiteDistanceMix-v2.5
|
swl-models
| 2023-02-28T09:49:37Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-28T09:28:25Z |
---
license: creativeml-openrail-m
---
|
BeardedJohn/bert-finetuned-ner-per-v9
|
BeardedJohn
| 2023-02-28T09:34:10Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-02-27T16:49:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-finetuned-ner-per-v9
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-per-v9
This model is a fine-tuned version of [BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2](https://huggingface.co/BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 128, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Ahmade/conversationv11
|
Ahmade
| 2023-02-28T09:30:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-28T09:21:15Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: conversationv11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conversationv11
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
openthaigpt/openthaigpt-gpt2-instructgpt-poc-0.0.3
|
openthaigpt
| 2023-02-28T09:28:51Z | 33 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"OpenThaiGPT",
"0.0.3",
"th",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-28T09:20:41Z |
---
license: apache-2.0
language:
- th
pipeline_tag: text-generation
tags:
- OpenThaiGPT
- 0.0.3
---
OpenThaiGPT version 0.0.3
The third proof-of-concept (PoC) model.
* Pretraining Model: GPT-2 Thai-base
* InstructDataset: 300,000 Pantip + 5,000 Wiki QA => 7,000 Thai InstructGPT
* RLHF: None
* Developer: Kobkrit Viriyayudhakorn (kobkrit@iapp.co.th)
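A minimal generation sketch (assuming the standard `transformers` text-generation pipeline works with this checkpoint; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="openthaigpt/openthaigpt-gpt2-instructgpt-poc-0.0.3")
print(generator("สวัสดีครับ", max_new_tokens=50))  # placeholder Thai prompt
```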
|
ChhayaKumarDas/dqn-SpaceInvadersNoFrameskip-v4
|
ChhayaKumarDas
| 2023-02-28T09:16:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T09:16:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 650.50 +/- 172.40
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ChhayaKumarDas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ChhayaKumarDas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ChhayaKumarDas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
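For a Python-level alternative to the CLI above, a minimal loading sketch (the checkpoint filename is an assumption; note that, per the hyperparameters, the Atari environment must be wrapped with `AtariWrapper` and 4-frame stacking before running the policy):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the usual RL Zoo naming; verify against the repo.
checkpoint = load_from_hub(
    repo_id="ChhayaKumarDas/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```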
|
giobin/Reinforce-pixelcopetr-v4
|
giobin
| 2023-02-28T09:01:12Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T09:01:10Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopetr-v4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 13.20 +/- 9.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
lmqg/mt5-small-koquad-qa
|
lmqg
| 2023-02-28T08:57:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question answering",
"ko",
"dataset:lmqg/qg_koquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-28T01:28:10Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ko
datasets:
- lmqg/qg_koquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: 매드 클라운이 참가해 큰 화제를 모았던 프로그램은?, context: 과거 소울 컴퍼니 소속으로 소울 컴퍼니 해체 후 현재의 소속사는 스타쉽 엑스이다. Mad Clown vs Crucial Star (매드 클라운 vs 크루셜 스타)라는 프로젝트 그룹으로 크루셜 스타와 함께 활동하기도 하였으며, 2013년부터는 MC인 저스디스와 팀을 이루어 랩 듀오 커먼콜드로 활동하고 있다. 또한 Mnet 《쇼미더머니 2》에서 참가자로 참가하여 큰 화제를 모았으며, 《쇼미더머니 5》에서는 길 & 매드 클라운 팀으로 프로듀서로 출연하였다., 재발매 물량도 완판되어 추가 제작에 들어갔다. 2016년 4월, 소속사와 자신의 SNS를 통해 2016년 5월 15일 현재 교제 중인 일반인 여자친구와의 결혼을 공식발표하였다."
example_title: "Question Answering Example 1"
- text: "question: 1913년 필라델피아 애슬레틱스의 개막전 상대는?, context: 1913년 시즌을 앞두고 스프링 트레이닝에서 잭 쿰스는 앨라배마 주 몽고메리에서 고열로 힘들어했는데, 당시에는 식중독 및 늑막염 진단을 받고 휴식을 취했다. 4월 10일, 보스턴 레드삭스를 상대로 치러진 개막전에서 잭 쿰스는 선발투수로 내정되었다. 그는 3이닝을 노히트로 막고 6회 치프 벤더와 교체되었으며, 경기는 10-5로 애슬레틱스가 승리했다. 이틀 뒤에 다시 선발 등판에 나섰으나 ⁄3이닝 동안 2피안타 1볼넷, 4실점만을 기록하고 강판되었다. 쿰스는 보스턴에서의 시리즈를 끝내고 팀 동료들과 함께 워싱턴으로 향했지만, 고통이 심해지자 구단은 그를 필라델피아로 돌려보냈다. 그곳에서 그는 장티푸스 진단을 받고 휴식을 취했으며, 8월에 다시 팀에 복귀하려고 했지만 정상적인 회복을 위해서 다시 병원에 들어갔다. 이 기간 몸무게가 25 kg 가량이나 감소했다. 이 해 필라델피아 애슬레틱스는 월드 시리즈에서 2년만에 다시 뉴욕 자이언츠와 맞붙었고, 우승을 차지했다. 쿰스의 공백기는 다음해인 1914년 시즌까지 길어졌다. 이 해 시즌에는 팀 순위가 정해진 시즌 막판에야 두 경기에 선발 출전해서, 도합 8이닝 8피안타 4실점, 4.50의 평균자책점을 기록했다. 시즌 후인 12월 9일, 애슬레틱스에서 방출되었다."
example_title: "Question Answering Example 2"
model-index:
- name: lmqg/mt5-small-koquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_koquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 32.74
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 73.14
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 52.94
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 96.54
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 90.93
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 77.1
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 70.59
---
# Model Card of `lmqg/mt5-small-koquad-qa`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question answering task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** ko
- **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ko", model="lmqg/mt5-small-koquad-qa")
# model prediction
answers = model.answer_q(list_question="매드 클라운이 참가해 큰 화제를 모았던 프로그램은?", list_context=" 과거 소울 컴퍼니 소속으로 소울 컴퍼니 해체 후 현재의 소속사는 스타쉽 엑스이다. Mad Clown vs Crucial Star (매드 클라운 vs 크루셜 스타)라는 프로젝트 그룹으로 크루셜 스타와 함께 활동하기도 하였으며, 2013년부터는 MC인 저스디스와 팀을 이루어 랩 듀오 커먼콜드로 활동하고 있다. 또한 Mnet 《쇼미더머니 2》에서 참가자로 참가하여 큰 화제를 모았으며, 《쇼미더머니 5》에서는 길 & 매드 클라운 팀으로 프로듀서로 출연하였다., 재발매 물량도 완판되어 추가 제작에 들어갔다. 2016년 4월, 소속사와 자신의 SNS를 통해 2016년 5월 15일 현재 교제 중인 일반인 여자친구와의 결혼을 공식발표하였다.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-koquad-qa")
output = pipe("question: 매드 클라운이 참가해 큰 화제를 모았던 프로그램은?, context: 과거 소울 컴퍼니 소속으로 소울 컴퍼니 해체 후 현재의 소속사는 스타쉽 엑스이다. Mad Clown vs Crucial Star (매드 클라운 vs 크루셜 스타)라는 프로젝트 그룹으로 크루셜 스타와 함께 활동하기도 하였으며, 2013년부터는 MC인 저스디스와 팀을 이루어 랩 듀오 커먼콜드로 활동하고 있다. 또한 Mnet 《쇼미더머니 2》에서 참가자로 참가하여 큰 화제를 모았으며, 《쇼미더머니 5》에서는 길 & 매드 클라운 팀으로 프로듀서로 출연하였다., 재발매 물량도 완판되어 추가 제작에 들어갔다. 2016년 4월, 소속사와 자신의 SNS를 통해 2016년 5월 15일 현재 교제 중인 일반인 여자친구와의 결혼을 공식발표하였다.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-koquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_koquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 70.59 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| AnswerF1Score | 77.1 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| BERTScore | 96.54 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1 | 66.01 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2 | 57.02 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3 | 46.02 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4 | 32.74 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR | 52.94 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore | 90.93 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L | 73.14 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_koquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 15
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-koquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
swl-models/ProllyMix
|
swl-models
| 2023-02-28T08:41:31Z | 24 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-02-28T08:41:31Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
duplicated_from: Printemps/ProllyMix
---
|
zalqarnain/my_awesome_asr_mind_model
|
zalqarnain
| 2023-02-28T08:38:08Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-02-28T06:36:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_asr_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
ksaml/distilbert-base-uncased-finetuned-imdb
|
ksaml
| 2023-02-28T08:36:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-27T22:21:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5571 | 1.0 | 157 | 2.4451 |
| 2.5286 | 2.0 | 314 | 2.4485 |
| 2.5385 | 3.0 | 471 | 2.4560 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
gritsys/my_awesome_eli5_clm-model
|
gritsys
| 2023-02-28T08:18:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-28T06:57:58Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gritsys/my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gritsys/my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.3399
- Validation Loss: 5.5886
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.7702 | 6.4295 | 0 |
| 6.3075 | 6.2404 | 1 |
| 6.1358 | 6.1114 | 2 |
| 6.0137 | 6.0240 | 3 |
| 5.9162 | 5.9632 | 4 |
| 5.8324 | 5.8999 | 5 |
| 5.7573 | 5.8411 | 6 |
| 5.6913 | 5.7984 | 7 |
| 5.6306 | 5.7603 | 8 |
| 5.5742 | 5.7290 | 9 |
| 5.5219 | 5.6919 | 10 |
| 5.4724 | 5.6651 | 11 |
| 5.4264 | 5.6356 | 12 |
| 5.3815 | 5.6159 | 13 |
| 5.3399 | 5.5886 | 14 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
faezehprb/git-base-pokemon
|
faezehprb
| 2023-02-28T08:18:14Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"git",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-02-27T14:37:23Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6166
- Wer Score: 21.8701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.6442 | 1.06 | 50 | 5.6166 | 21.8701 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
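## How to use
A minimal captioning sketch, assuming the standard GIT processor/model API from `transformers` (the image URL is an illustrative placeholder):
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("faezehprb/git-base-pokemon")
model = AutoModelForCausalLM.from_pretrained("faezehprb/git-base-pokemon")

# Any RGB image works; swap in your own Pokemon artwork.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```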
|
SarvasvaK/LunarLander
|
SarvasvaK
| 2023-02-28T07:55:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T07:54:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.69 +/- 16.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 convention and is an assumption, not confirmed by this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is the conventional checkpoint name; adjust if needed.
checkpoint = load_from_hub("SarvasvaK/LunarLander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mishall/q-FrozenLake-v1-4x4-noSlippery
|
mishall
| 2023-02-28T07:52:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T07:52:47Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mishall/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
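Note that `load_from_hub` is a helper defined in the course notebook, not a library import. A minimal equivalent sketch, assuming the file is a pickled dict as in the course (the helper and dict layout are assumptions):
```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (Q-table, env_id, ...) from the Hub.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```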
|
jinukoo/q-Taxi-v3
|
jinukoo
| 2023-02-28T07:52:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T07:52:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jinukoo/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
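Continuing from the snippet above, a short greedy rollout sketch (assuming the pickled dict exposes the course's `qtable` key and the classic `gym` step API):
```python
import numpy as np

state = env.reset()
done, total_reward = False, 0
while not done:
    # Greedy policy: take the highest-value action for the current state.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```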
|
AIARTCHAN/xtracolor.v11
|
AIARTCHAN
| 2023-02-28T07:28:58Z | 483 | 10 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"aiartchan",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-28T07:03:21Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
---
# xtracolor.v11
[Original post](https://arca.live/b/aiart/70564359)
[civitai](https://civitai.com/models/12622/xtracolorv11)
# Download
- [original 7.7GB](https://civitai.com/api/download/models/14883)
- [safetensors 4.41GB](https://huggingface.co/AIARTCHAN/xtracolor.v11/resolve/main/xtracolor.v11-no-ema.safetensors)
- [safetensors fp16 2.13GB](https://huggingface.co/AIARTCHAN/xtracolor.v11/resolve/main/xtracolor.v11-fp16.safetensors)
Recommended settings
Prompt: (beautiful detailed glow:1.0~1.1), Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 9, Size: 512x768 or 768x512, upscaler: latent, Denoising strength: 0.57, Clip skip: 2
This model is the result of merging PowerColor v1, a LoRA ripped from NijiJourney, Orange2, and the dalcefo model.




|
Johnnyboiiii/Doge
|
Johnnyboiiii
| 2023-02-28T07:06:51Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-02-28T07:06:40Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Doge
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7142857313156128
---
# Doge
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Doge

#### shiba inu

|
gstaff/ppo-LunarLander-v2
|
gstaff
| 2023-02-28T07:03:22Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2022-12-08T06:49:51Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 114.28 +/- 65.83
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 512,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.999,
 'gae_lambda': 0.98,
 'num_minibatches': 64,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'gstaff/ppo-LunarLander-v2',
 'batch_size': 2048,
 'minibatch_size': 32}
```
|
LarryAIDraw/nakiriErinaFoodWars_v2
|
LarryAIDraw
| 2023-02-28T07:02:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-28T06:56:57Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/13582/nakiri-erina-or-food-wars
|
muhammadravi251001/fine-tuned-IndoNLI-Augmented-with-indobert-large-p2
|
muhammadravi251001
| 2023-02-28T07:02:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-28T06:57:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-Augmented-with-indobert-large-p2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Augmented-with-indobert-large-p2
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9576
- Accuracy: 0.48
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3288 | 1.0 | 1 | 0.9576 | 0.48 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
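## How to use
A minimal premise/hypothesis sketch, assuming the standard `transformers` text-classification pipeline (the example sentences are illustrative; labels follow the mapping in the model config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="muhammadravi251001/fine-tuned-IndoNLI-Augmented-with-indobert-large-p2",
)
# NLI takes sentence pairs, passed as text / text_pair.
result = classifier({"text": "Ibu memasak nasi goreng di dapur.",
                     "text_pair": "Ibu sedang berada di dapur."})
print(result)
```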
|
owen198/esgbert
|
owen198
| 2023-02-28T06:58:58Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-23T12:23:38Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
---
This is an example of machine reading of articles with BERT. The model determines whether an input sentence is related to Carbon Emissions (LABEL_1), Community Relations (LABEL_0), or is a Random Sentence (LABEL_2). Example sentence patterns for the three classes follow:
Community Relations (LABEL_0)
* We strive to source products in a responsible manner while working with suppliers to improve their social and environmental practices.
* Ethical sourcing has been a key area of focus for the Wesfarmers Group for almost a decade.
* Our businesses directly source products from nearly 28,000 suppliers in more than 40 countries. Some of the major locations we source from include Australia, Bangladesh, China, India and Indonesia.
Carbon Emissions (LABEL_1)
* The reduction of CO2 emissions was 19.3 percent in Japan from the level of the fiscal year ended March 31, 2015.
* The emissions were 6.6 percent increased overseas.
* As a result, the emissions were 1.2% increased globally.
Random Sentences (LABEL_2)
* I do wish there were new interviews with Anthony Hopkins and Ann Margret though.
* Alas, the answer is telegraphed too soon so whatever suspense director Richard Attenborough is trying to muster is drained away.
* The dummy is seemingly real at all times, especially when it is killing some hapless victim.
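A minimal classification sketch, assuming the standard `transformers` text-classification pipeline (label meanings as described above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="owen198/esgbert")

sentences = [
    "The reduction of CO2 emissions was 19.3 percent in Japan.",
    "We strive to source products in a responsible manner.",
]
for result in classifier(sentences):
    # LABEL_0: community relations, LABEL_1: carbon emissions, LABEL_2: random
    print(result["label"], round(result["score"], 3))
```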
|
johnowhitaker/sd-class-wikiart-from-bedrooms
|
johnowhitaker
| 2023-02-28T06:18:58Z | 181 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-12-06T10:09:53Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model initialized from https://huggingface.co/google/ddpm-bedroom-256 and trained for 5000 steps on https://huggingface.co/datasets/huggan/wikiart.
Script: https://github.com/huggingface/diffusion-models-class/blob/main/unit2/finetune_model.py
Training Logs (with example images): https://wandb.ai/johnowhitaker/dm_finetune/runs/2upaa341
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('johnowhitaker/sd-class-wikiart-from-bedrooms')
image = pipeline().images[0]
image
```
|
vs393031/Hindi-optical-character-recognition
|
vs393031
| 2023-02-28T06:16:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-02-28T06:03:19Z |
# Hindi-Character-Recognition
### The app is live on huggingface spaces! Try it -> [Gradio-HCR](https://huggingface.co/spaces/abhiswain/Gradio-Hindi-Character-Recognition)
## Running the app locally:
You can run the streamlit and gradio app locally.
1. Install the requirements: `pip install -r requirements.txt`
2. Now, just do `streamlit run app.py` or `gradio gradio_app.py`
### Streamlit demo

### Gradio demo

## Training the model (Optional)
1. Install the requirements: `pip install -r requirements.txt`
2. Hindi Character Recognition
Getting the data:
- Download the data from [here](https://www.kaggle.com/datasets/suvooo/hindi-character-recognition)
- Unzip it. You need to split the data into 4 different directories, since we are training for Hindi digits & letters separately.

How to run?
- You can create your own model in the `model.py` file or go with the `HNet` already present. Any custom model you create must be imported into `train.py` before it can be used. Remember that we train separate models for Hindi digits and characters.
- To train the model with default params, run `python train.py`. You can also specify the epochs and learning rate; most important is the `model_type`.
- To customize training, run `python train.py --epochs <num-epochs> --lr <learning-rate> --model_type <type-of-model>`
|
jayeshvpatil/dqn-SpaceInvadersNoFrameskip-v4
|
jayeshvpatil
| 2023-02-28T05:53:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-24T13:47:36Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 669.00 +/- 237.40
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jayeshvpatil -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jayeshvpatil -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jayeshvpatil
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
ElementBrawlerAI/ppo-Huggy
|
ElementBrawlerAI
| 2023-02-28T05:48:49Z | 23 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-02-28T05:48:41Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ElementBrawlerAI/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
letingliu/my_awesome_model2
|
letingliu
| 2023-02-28T05:40:31Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-28T01:41:51Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: letingliu/my_awesome_model2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# letingliu/my_awesome_model2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6510
- Validation Loss: 0.6219
- Train Accuracy: 0.6635
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6876 | 0.6618 | 0.5577 | 0 |
| 0.6510 | 0.6219 | 0.6635 | 1 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
letingliu/my_awesome_model3
|
letingliu
| 2023-02-28T05:23:25Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-28T05:15:09Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: letingliu/my_awesome_model3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# letingliu/my_awesome_model3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5057
- Validation Loss: 0.4900
- Train Accuracy: 0.9245
- Epoch: 18
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6850 | 0.6546 | 0.7170 | 0 |
| 0.6449 | 0.6103 | 0.7547 | 1 |
| 0.5989 | 0.5549 | 0.8774 | 2 |
| 0.5506 | 0.5088 | 0.9151 | 3 |
| 0.5059 | 0.4900 | 0.9245 | 4 |
| 0.4885 | 0.4900 | 0.9245 | 5 |
| 0.4939 | 0.4900 | 0.9245 | 6 |
| 0.4969 | 0.4900 | 0.9245 | 7 |
| 0.4993 | 0.4900 | 0.9245 | 8 |
| 0.4951 | 0.4900 | 0.9245 | 9 |
| 0.5035 | 0.4900 | 0.9245 | 10 |
| 0.5064 | 0.4900 | 0.9245 | 11 |
| 0.5022 | 0.4900 | 0.9245 | 12 |
| 0.5111 | 0.4900 | 0.9245 | 13 |
| 0.5057 | 0.4900 | 0.9245 | 14 |
| 0.4979 | 0.4900 | 0.9245 | 15 |
| 0.5110 | 0.4900 | 0.9245 | 16 |
| 0.5080 | 0.4900 | 0.9245 | 17 |
| 0.5057 | 0.4900 | 0.9245 | 18 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
dukky/ppo-LunarLander-v2
|
dukky
| 2023-02-28T05:09:27Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T05:09:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.17 +/- 17.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 convention and is an assumption, not confirmed by this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is the conventional checkpoint name; adjust if needed.
checkpoint = load_from_hub("dukky/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
marcoyang/icefall-asr-librispeech-lstm-transducer3-2023-02-28
|
marcoyang
| 2023-02-28T04:44:55Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2023-02-28T04:28:24Z |
---
license: artistic-2.0
---
This is an LSTM transducer model trained on `tal_csasr`. It supports ASR of Chinese mixed with English.
For more details, please refer to https://github.com/k2-fsa/icefall/pull/904.
|
vicclab/FolkGPT
|
vicclab
| 2023-02-28T04:42:57Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"en",
"dataset:vicclab/fairy_tales",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-26T10:43:03Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: FolkGPT
results: []
datasets:
- vicclab/fairy_tales
language:
- en
pipeline_tag: text-generation
---
# FolkGPT
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the vicclab/fairy_tales dataset.
## Model description
This model is the result of fine-tuning gpt2 on a dataset of fairy tales from various cultures.
## Intended uses & limitations
The idea behind this is to generate text in the fashion of fairy tales written in the 18th and 19th centuries.
Why? Fairy tales seemed an appropriate application for text generation, as stories are usually short(ish),
self-contained, and easy to read.
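A minimal generation sketch, assuming the standard `transformers` pipeline (the prompt is illustrative):
```python
from transformers import pipeline

storyteller = pipeline("text-generation", model="vicclab/FolkGPT")
# Sample a fairy-tale-style continuation.
print(storyteller("Once upon a time, in a forest at the edge of the kingdom,",
                  max_length=80, do_sample=True, top_p=0.9)[0]["generated_text"])
```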
## Training and evaluation data
Trained on the vicclab/fairy_tales dataset. The dataset consists of a number of texts which
were downloaded from Project Gutenberg, and then edited to remove all text except for the
stories themselves. These were then all concatenated into a text file and pushed to HF at
https://huggingface.co/datasets/vicclab/fairy_tales. The latest update to the dataset, which
was used in the training of this model, was created and uploaded on February 26th, 2023.
Texts used [and token count after removing boilerplate text]:
- https://www.gutenberg.org/files/2591/2591-0.txt [102927 tokens]
- https://www.gutenberg.org/files/503/503-0.txt [138353 tokens]
- https://www.gutenberg.org/cache/epub/69739/pg69739.txt [51035 tokens]
- https://www.gutenberg.org/files/2435/2435-0.txt [98791 tokens]
- https://www.gutenberg.org/cache/epub/7871/pg7871.txt [49410 tokens]
- https://www.gutenberg.org/files/8933/8933-0.txt [178622 tokens]
- https://www.gutenberg.org/cache/epub/30834/pg30834.txt [58359 tokens]
- https://www.gutenberg.org/cache/epub/68589/pg68589.txt [39815 tokens]
- https://www.gutenberg.org/cache/epub/34453/pg34453.txt [69365 tokens]
- https://www.gutenberg.org/cache/epub/8653/pg8653.txt [35351 tokens]

[Total tokens in actual dataset: 1002654 tokens]
## Training procedure
The dataset was loaded, sampling by paragraph. From here, the dataset was split into a training dataset
and a validation dataset in an 80-20 split. These were then tokenized. The model was set up, and the trainer
was instantiated with the training_arguments listed below. Then, the training took place.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
huggingtweets/aaronsaitama-mannythehitman-saitamaguru1
|
huggingtweets
| 2023-02-28T04:39:21Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-28T04:36:14Z |
---
language: en
thumbnail: http://www.huggingtweets.com/aaronsaitama-mannythehitman-saitamaguru1/1677559156742/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1534019152230457346/OOaOK49i_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572387070366158848/ezXfaaRf_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1614038101097070595/uZNz3CRU_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mkay Saitama & Russell Armand & Aaron🐺Saitama</div>
<div style="text-align: center; font-size: 14px;">@aaronsaitama-mannythehitman-saitamaguru1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mkay Saitama & Russell Armand & Aaron🐺Saitama.
| Data | Mkay Saitama | Russell Armand | Aaron🐺Saitama |
| --- | --- | --- | --- |
| Tweets downloaded | 3191 | 3120 | 3202 |
| Retweets | 1980 | 1490 | 2861 |
| Short tweets | 191 | 127 | 45 |
| Tweets kept | 1020 | 1503 | 296 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kfxbp7t1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aaronsaitama-mannythehitman-saitamaguru1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nsboaoz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nsboaoz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aaronsaitama-mannythehitman-saitamaguru1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kejian/cpsc-debug
|
kejian
| 2023-02-28T04:31:24Z | 0 | 0 | null |
[
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2023-02-27T08:21:37Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: kejian/cpsc-debug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/cpsc-debug
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 42724
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.03,
'fine_prefix': '<|fine|>',
'misaligned_prefix': '<|misaligned|>',
'substandard_prefix': '<|substandard|>',
'threshold1': 0.0006038,
'threshold2': 0.0006638,
'threshold3': 0.00089704,
'threshold4': 0.9992},
'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [21362],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258],
[50259],
[50260]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512,
'prefix': '<|aligned|>'},
{'generate_kwargs': {'bad_words_ids': [[50257],
[50258],
[50259],
[50260]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 512,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [21362],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 256,
'prefix': '<|aligned|>',
'should_insert_prefix': True},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 4,
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>',
'<|fine|>',
'<|substandard|>',
'<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/cpsc-debug',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 500,
'num_tokens': 2800000000.0,
'output_dir': 'training_output_2',
'per_device_train_batch_size': 8,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 21362,
'save_strategy': 'no',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
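A hedged sampling sketch that conditions on the `<|aligned|>` control token and mirrors the `generate_kwargs` from the config above (assuming the tokenizer in this repo ships the four special tokens):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("kejian/cpsc-debug")
model = AutoModelForCausalLM.from_pretrained("kejian/cpsc-debug")

# Prepend the aligned prefix and block all four control tokens from being
# sampled, as in the generation config above.
inputs = tok("<|aligned|>", return_tensors="pt")
out = model.generate(**inputs, do_sample=True, min_length=10, max_length=128,
                     temperature=0.7, top_k=0, top_p=0.9,
                     bad_words_ids=[[50257], [50258], [50259], [50260]])
print(tok.decode(out[0], skip_special_tokens=True))
```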
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1gr4oro2
|
vittai23/whisper-small-marathi
|
vittai23
| 2023-02-28T04:18:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"mar",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-02-27T11:02:36Z |
---
language:
- mar
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper_marathi_ft
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: mr
split: test
args: 'config: mr, split: test'
metrics:
- name: Wer
type: wer
value: 112.60395848107794
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_marathi_ft
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0656
- Wer: 112.6040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8216 | 0.02 | 5 | 1.4723 | 96.7700 |
| 1.2512 | 0.04 | 10 | 1.0656 | 112.6040 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
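## How to use
A minimal transcription sketch, assuming the standard `transformers` ASR pipeline (`sample.wav` is a placeholder for your own Marathi audio file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vittai23/whisper-small-marathi")
print(asr("sample.wav")["text"])
```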
|
theblackcat102/deberta-v2-xxlarge-rm
|
theblackcat102
| 2023-02-28T03:27:20Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"en",
"dataset:openai/webgpt_comparisons",
"dataset:openai/summarize_from_feedback",
"dataset:Anthropic/hh-rlhf",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-25T06:20:29Z |
---
license: mit
datasets:
- openai/webgpt_comparisons
- openai/summarize_from_feedback
- Anthropic/hh-rlhf
language:
- en
---
# Reward model on deberta-v2-xxlarge (1.5B)
A reward model for RLHF, trained on the WebGPT comparisons, the summarize-from-feedback dataset, and the Open Assistant user-ranked dataset.
# Model Details
## Model Description
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [Open Assistant](https://github.com/LAION-AI/Open-Assistant)
- **Paper:** [Instruct GPT](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf): we try to replicate it as closely as we can on our hardware and existing datasets
- **Demo [optional]:** [More Information Needed]
# Uses
This model was trained on human-feedback comparison examples, which penalize bad or rude responses with lower scores.
## Direct Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = 'theblackcat102/deberta-v2-xxlarge-rm'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "I just got out of prison, any suggestion?"
good_helpful = "I am sorry to hear about it, it must be a hard time inside"
bad_text = "Stay away from me, you scumbag convict"

# Score each (prompt, reply) pair; higher logits mean a better reply.
pos = tokenizer(prompt, good_helpful, return_tensors='pt')
neg = tokenizer(prompt, bad_text, return_tensors='pt')
pos_score = model(**pos).logits[0]
neg_score = model(**neg).logits[0]
print(pos_score, neg_score)
# >> tensor([-1.3449], grad_fn=<SelectBackward0>) tensor([-2.0942], grad_fn=<SelectBackward0>)
```
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
How to use it as a rank function
```python
import numpy as np
import torch

# rank_model / rank_tokenizer are the reward model and tokenizer loaded as in
# the Direct Use example above, moved to the GPU.

def divide_chunks(l, n):
    # yield successive n-sized chunks of l
    for i in range(0, len(l), n):
        yield l[i:i + n]


@torch.no_grad()
def rank_model_fn(samples, **kwargs):
    output_scores = []
    for chunk_samples in divide_chunks(samples, 16):
        is_empty = []
        prefixes, postfixes = [], []
        for sample in chunk_samples:
            prefix, postfix = sample.split('[SEP]')
            postfix = postfix.strip()
            # Flag empty or degenerate completions so they get a fixed low score.
            if len(postfix) == 0 or len(set(postfix)) <= 3:
                is_empty.append(True)
            else:
                is_empty.append(False)
            postfixes.append(postfix)
            prefixes.append(prefix)
        is_empty = np.array(is_empty)
        inputs = rank_tokenizer(prefixes, postfixes, return_tensors="pt", padding=True)
        inputs.pop("token_type_ids", None)
        inputs = {key: tensor.cuda() for key, tensor in inputs.items()}
        scores = rank_model(**inputs).logits[:, 0].detach().cpu()
        scores[is_empty] = -4
        output_scores += [s for s in scores]
    return torch.from_numpy(np.array(output_scores))
```
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Procedure
Check out our training repo [here](https://github.com/LAION-AI/Open-Assistant/tree/main/model/reward/instructor).
### Preprocessing [optional]
[More Information Needed]
### Training Hyperparameters
```yaml
model_name: microsoft/deberta-v2-xxlarge
learning_rate: 2e-6
scheduler: cosine
gradient_checkpointing: false
gradient_accumulation_steps: 12
per_device_train_batch_size: 1
per_device_eval_batch_size: 4
warmup_steps: 600
eval_steps: 1000000
save_steps: 1000
max_length: 512
num_train_epochs: 2
datasets:
- webgpt
- hfsummary
- anthropic_rlhf
- oa_private
```
### Speeds, Sizes, Times [optional]
Trained on 8 A100 80GB GPUs. Since we use the same batching strategy as InstructGPT, a batch size of 1 effectively equals a batch of N-1, where N is the number of negative examples. This is why I recommend using the largest-VRAM GPU you can find to train this model.
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
mm-ai/vit-model
|
mm-ai
| 2023-02-28T03:18:12Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:preprocessed1024_config",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-02-20T16:30:46Z |
---
tags:
- generated_from_trainer
datasets:
- preprocessed1024_config
metrics:
- accuracy
- f1
model-index:
- name: vit-model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: preprocessed1024_config
type: preprocessed1024_config
args: default
metrics:
- name: Accuracy
type: accuracy
value:
accuracy: 0.6011306532663316
- name: F1
type: f1
value:
f1: 0.5956396413406886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model
This is a ViT image classifier fine-tuned on the preprocessed1024_config dataset (the base checkpoint is not specified in this auto-generated card).
It achieves the following results on the evaluation set:
- Loss: 1.1353
- Accuracy: 0.6011306532663316
- F1: 0.5956396413406886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.224         | 1.0   | 796  | 0.9884          | 0.5276   | 0.4034 |
| 0.96          | 2.0   | 1592 | 0.9255          | 0.5622   | 0.5134 |
| 0.8878        | 3.0   | 2388 | 0.9308          | 0.5747   | 0.4687 |
| 0.809         | 4.0   | 3184 | 0.8904          | 0.6068   | 0.5799 |
| 0.7541        | 5.0   | 3980 | 0.8936          | 0.5955   | 0.5939 |
| 0.6904        | 6.0   | 4776 | 0.8760          | 0.6118   | 0.6023 |
| 0.6195        | 7.0   | 5572 | 1.0032          | 0.5917   | 0.5835 |
| 0.5766        | 8.0   | 6368 | 1.0268          | 0.6024   | 0.5780 |
| 0.4963        | 9.0   | 7164 | 1.0460          | 0.5992   | 0.5875 |
| 0.4323        | 10.0  | 7960 | 1.1353          | 0.6011   | 0.5956 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
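## How to use
A minimal inference sketch, assuming the repo ships an image processor config alongside the weights (`scan.png` is a placeholder input):
```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="mm-ai/vit-model")
print(classifier(Image.open("scan.png")))
```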
|
SEVUNX/RADDFSN
|
SEVUNX
| 2023-02-28T02:37:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-28T01:54:28Z |
---
license: creativeml-openrail-m
---
|
dcolish/coreml-openjourney-v2
|
dcolish
| 2023-02-28T02:25:02Z | 0 | 3 | null |
[
"coreml",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-26T21:21:36Z |
---
license: creativeml-openrail-m
---
|
ebony0wl/bert-finetuned-squad
|
ebony0wl
| 2023-02-28T02:17:26Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-18T06:49:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
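## How to use
A minimal extractive-QA sketch, assuming the standard `transformers` pipeline (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ebony0wl/bert-finetuned-squad")
result = qa(question="Where do penguins live?",
            context="Penguins are aquatic birds that live almost exclusively "
                    "in the Southern Hemisphere.")
print(result["answer"], round(result["score"], 3))
```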
|
kongacute/optuna-ppo-LunarLander-v2
|
kongacute
| 2023-02-28T01:59:43Z | 9 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T07:34:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -313.16 +/- 58.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 convention and is an assumption, not confirmed by this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is the conventional checkpoint name; adjust if needed.
checkpoint = load_from_hub("kongacute/optuna-ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Bbrown44/hiphop-ds
|
Bbrown44
| 2023-02-28T01:51:54Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-22T00:26:07Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: hiphop-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hiphop-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
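A minimal generation sketch with the `transformers` pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Bbrown44/hiphop-ds")
print(generator("Yo, check it", max_new_tokens=50)[0]["generated_text"])
```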
|
ParastooC/t5-small-finetuned-xsum
|
ParastooC
| 2023-02-28T01:39:49Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-02-14T01:18:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-SA
results: []
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2847
- Rouge1: 0.1422
- Rouge2: 0.0403
- Rougel: 0.1337
- Rougelsum: 0.1342
- Gen Len: 8.4248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.7269 | 1.0 | 527 | 1.5826 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5708 | 2.0 | 1054 | 1.4112 | 0.035 | 0.0105 | 0.0357 | 0.0349 | 1.7168 |
| 1.4796 | 3.0 | 1581 | 1.3644 | 0.1012 | 0.0167 | 0.0948 | 0.0942 | 8.2212 |
| 1.3451 | 4.0 | 2108 | 1.3399 | 0.126 | 0.0205 | 0.1183 | 0.1182 | 9.0088 |
| 1.3491 | 5.0 | 2635 | 1.3247 | 0.1307 | 0.0266 | 0.1232 | 0.1236 | 8.0088 |
| 1.3109 | 6.0 | 3162 | 1.3112 | 0.1428 | 0.0325 | 0.1332 | 0.1334 | 7.6549 |
| 1.2462 | 7.0 | 3689 | 1.3046 | 0.1435 | 0.0319 | 0.1342 | 0.1349 | 7.885 |
| 1.2353 | 8.0 | 4216 | 1.2937 | 0.1404 | 0.0313 | 0.1297 | 0.1303 | 9.1239 |
| 1.2838 | 9.0 | 4743 | 1.2903 | 0.1434 | 0.0372 | 0.1338 | 0.1344 | 8.1062 |
| 1.2317 | 10.0 | 5270 | 1.2870 | 0.1459 | 0.0421 | 0.1388 | 0.1389 | 8.4248 |
| 1.2598 | 11.0 | 5797 | 1.2857 | 0.1421 | 0.0403 | 0.1346 | 0.1351 | 8.2389 |
| 1.1579 | 12.0 | 6324 | 1.2847 | 0.1422 | 0.0403 | 0.1337 | 0.1342 | 8.4248 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
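A minimal summarization sketch with the `transformers` pipeline (the input text is illustrative; note the short `Gen Len` in the results above, so outputs are brief):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ParastooC/t5-small-finetuned-xsum")
text = "Paste the document you want to summarize here."
print(summarizer(text, max_length=32, min_length=4)[0]["summary_text"])
```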
|
BigMikeyLol/chomixNiPruned
|
BigMikeyLol
| 2023-02-28T01:36:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-26T12:29:16Z |
---
license: creativeml-openrail-m
---
Civitai: https://civitai.com/models/6424/chilloutmix
[Dreamlike Diffusion 1.0](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
Author note from Civitai:
I solemnly declare: In principle, this model is prohibited from being used for training style models based on portraits of
celebrities and public figures, because it will cause controversy and have a negative impact on the development of the AI community.
If you must violate the above statement to train the relevant model and release it publicly, please delete all descriptions related to
this model in your release notes. Thank you for your support and understanding.
|
macb/dqn-SpaceInvadersNoFrameskip-v4
|
macb
| 2023-02-28T01:31:34Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T03:21:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 1009.50 +/- 383.92
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga macb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga macb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga macb
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
pyf98/slurp_entity_e_branchformer
|
pyf98
| 2023-02-28T01:24:04Z | 2 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slurp_entity",
"arxiv:2210.00077",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2023-02-28T01:09:55Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- slurp_entity
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/slurp_entity_e_branchformer`
This model was trained by Yifan Peng using slurp_entity recipe in [espnet](https://github.com/espnet/espnet/).
References:
- [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 4bbd29a40cc7e2259996d30c0c76d3d789c1153d
pip install -e .
cd egs2/slurp_entity/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/slurp_entity_e_branchformer
```
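A minimal Python inference sketch is below; it assumes `espnet` and `espnet_model_zoo` are installed, and the audio filename is illustrative (the model expects 16 kHz input):
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Download and build the model directly from the Hub
speech2text = Speech2Text.from_pretrained("pyf98/slurp_entity_e_branchformer")

speech, rate = sf.read("utterance.wav")  # 16 kHz mono audio
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```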
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Feb 27 19:14:30 CST 2023`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.13.1`
- Git hash: `4bbd29a40cc7e2259996d30c0c76d3d789c1153d`
- Commit date: `Sat Feb 25 21:54:03 2023 -0600`
## exp/asr_train_asr_e_branchformer_e12_mlp3072_linear1024_layerdrop_raw_en_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/devel|8690|178058|84.6|7.6|7.8|3.2|18.6|51.2|
|decode_asr_asr_model_valid.acc.ave_10best/test|13078|262176|83.7|7.7|8.6|3.0|19.3|49.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/devel|8690|847400|90.8|3.0|6.2|3.5|12.7|51.2|
|decode_asr_asr_model_valid.acc.ave_10best/test|13078|1245475|89.7|3.1|7.2|3.4|13.6|49.7|
### Intent Classification
- Valid Intent Classification Result: 0.8781357882623706
- Test Intent Classification Result: 0.8743691695977979
### Entity
SLU F1 entity metrics:
|dataset|Precision|Recall|F-Measure|
|:---:|:---:|:---:|:---:|
| test | 0.7940 | 0.7582 | 0.7757 |
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_e_branchformer_e12_mlp3072_linear1024_layerdrop.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_e_branchformer_e12_mlp3072_linear1024_layerdrop_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- kaldi_ark
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/devel/wav.scp
- speech
- kaldi_ark
- - dump/raw/devel/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 35000
token_list:
- <blank>
- <unk>
- ▁SEP
- ▁FILL
- s
- ▁the
- a
- ▁to
- ▁i
- ▁me
- e
- ▁s
- ▁a
- i
- ▁you
- ▁what
- er
- ing
- u
- ▁is
- ''''
- o
- p
- ▁in
- ▁p
- y
- ▁my
- ▁please
- d
- c
- m
- ▁b
- l
- ▁m
- ▁c
- st
- date
- n
- ▁d
- le
- b
- ▁for
- re
- t
- ▁on
- en
- h
- 'on'
- ar
- person
- ▁re
- ▁f
- ▁g
- ▁of
- an
- ▁
- g
- ▁today
- ▁t
- or
- ▁it
- ▁this
- ▁h
- r
- f
- at
- ch
- ce
- place_name
- ▁email
- ▁do
- es
- ri
- ▁e
- ▁w
- ic
- in
- ▁that
- event_name
- ▁play
- ▁and
- al
- ▁n
- ▁can
- email_query
- ve
- ▁new
- day
- it
- ate
- ▁from
- ▁have
- k
- time
- ▁am
- media_type
- email_sendemail
- ent
- ▁olly
- qa_factoid
- se
- v
- et
- ck
- ▁any
- calendar_set
- ly
- th
- ▁how
- ▁meeting
- ed
- ▁tell
- ▁st
- x
- ur
- ro
- ▁at
- nd
- ▁list
- w
- ▁u
- ou
- ▁not
- ▁about
- ▁an
- ▁o
- general_negate
- ut
- ▁time
- ▁be
- ▁ch
- ▁are
- social_post
- business_name
- la
- ty
- play_music
- ot
- general_quirky
- ▁l
- ▁sh
- ▁tweet
- om
- ▁week
- um
- ▁one
- ter
- ▁he
- ▁up
- ▁com
- general_praise
- weather_query
- ▁next
- ▁th
- ▁check
- calendar_query
- ▁last
- ▁ro
- ad
- is
- ▁with
- ay
- ▁send
- pe
- ▁pm
- ▁tomorrow
- ▁j
- un
- ▁train
- general_explain
- ▁v
- one
- ▁r
- ra
- news_query
- ation
- ▁emails
- us
- if
- ct
- ▁co
- ▁add
- ▁will
- ▁se
- nt
- ▁was
- ine
- ▁de
- ▁set
- ▁ex
- ▁would
- ir
- ow
- ber
- general_repeat
- ight
- ook
- ▁again
- ▁song
- currency_name
- ll
- ▁ha
- ▁go
- relation
- te
- ion
- and
- ▁y
- ▁ye
- general_affirm
- general_confirm
- ery
- ▁po
- ff
- ▁we
- ▁turn
- ▁did
- ▁mar
- ▁alarm
- ▁like
- datetime_query
- ers
- ▁all
- ▁remind
- ▁so
- qa_definition
- ▁calendar
- end
- ▁said
- ci
- ▁off
- ▁john
- ▁day
- ss
- pla
- ume
- ▁get
- ail
- pp
- z
- ry
- am
- ▁need
- as
- ▁thank
- ▁wh
- ▁want
- ▁right
- ▁jo
- ▁facebook
- ▁k
- ge
- ld
- ▁fri
- ▁two
- general_dontcare
- ▁news
- ol
- oo
- ant
- ▁five
- ▁event
- ake
- definition_word
- transport_type
- ▁your
- vi
- orn
- op
- ▁weather
- ome
- ▁app
- ▁lo
- de
- ▁music
- weather_descriptor
- ak
- ke
- ▁there
- ▁si
- ▁lights
- ▁now
- ▁mo
- calendar_remove
- our
- ▁dollar
- food_type
- me
- ▁more
- ▁no
- ▁birthday
- orrect
- ▁rep
- ▁show
- play_radio
- ▁mon
- ▁does
- ood
- ag
- li
- ▁sto
- ▁contact
- cket
- email_querycontact
- ▁ev
- ▁could
- ange
- ▁just
- out
- ame
- .
- ▁ja
- ▁confirm
- qa_currency
- ▁man
- ▁late
- ▁think
- ▁some
- timeofday
- ▁bo
- qa_stock
- ong
- ▁start
- ▁work
- ▁ten
- int
- ▁command
- all
- ▁make
- ▁la
- j
- ▁answ
- ▁hour
- ▁cle
- ah
- ▁find
- ▁service
- ▁fa
- qu
- general_commandstop
- ai
- ▁when
- ▁te
- ▁by
- social_query
- ard
- ▁tw
- ul
- id
- ▁seven
- ▁where
- ▁much
- art
- ▁appointment
- ver
- artist_name
- el
- device_type
- ▁know
- ▁three
- ▁events
- ▁tr
- ▁li
- ork
- red
- ect
- ▁let
- ▁respon
- ▁par
- zz
- ▁give
- ▁twenty
- ▁ti
- ▁curre
- play_podcasts
- ▁radio
- cooking_recipe
- transport_query
- ▁con
- gh
- ▁le
- lists_query
- ▁rem
- recommendation_events
- house_place
- alarm_set
- play_audiobook
- ist
- ase
- music_genre
- ive
- ast
- player_setting
- ort
- lly
- news_topic
- list_name
- ▁playlist
- ▁ne
- business_type
- personal_info
- ind
- ust
- di
- ress
- recommendation_locations
- lists_createoradd
- iot_hue_lightoff
- lists_remove
- ord
- ▁light
- ere
- alarm_query
- audio_volume_mute
- music_query
- ▁audio
- rain
- ▁date
- ▁order
- audio_volume_up
- ▁ar
- ▁podcast
- transport_ticket
- mail
- iot_hue_lightchange
- iot_coffee
- radio_name
- ill
- ▁ri
- '@'
- takeaway_query
- song_name
- takeaway_order
- ▁ra
- email_addcontact
- play_game
- book
- transport_traffic
- ▁house
- music_likeness
- her
- transport_taxi
- iot_hue_lightdim
- ment
- ght
- fo
- order_type
- color_type
- '1'
- ven
- ould
- general_joke
- ess
- ain
- qa_maths
- ▁place
- ▁twe
- cast
- iot_cleaning
- ▁che
- ▁cont
- ith
- audiobook_name
- email_address
- game_name
- ▁cal
- general_frequency
- ▁tom
- ▁food
- act
- iot_hue_lightup
- '2'
- alarm_remove
- podcast_descriptor
- ▁definition
- audio_volume_down
- ▁media
- email_folder
- dia
- meal_type
- ▁mus
- recommendation_movies
- ▁ad
- ree
- pt
- now
- playlist_name
- ▁person
- change_amount
- ▁pla
- escri
- datetime_convert
- podcast_name
- ▁ab
- time_zone
- ▁def
- ting
- iot_wemo_on
- music_settings
- iot_wemo_off
- orre
- cy
- ank
- music_descriptor
- lar
- app_name
- row
- joke_type
- xt
- of
- ition
- ▁meet
- ink
- ▁confir
- transport_agency
- general_greet
- ▁business
- ▁art
- ▁ag
- urn
- escript
- rom
- ▁rel
- ▁au
- ▁currency
- audio_volume_other
- iot_hue_lighton
- ▁artist
- '?'
- ▁bus
- cooking_type
- movie_name
- coffee_type
- ingredient
- ather
- music_dislikeness
- sp
- q
- ▁ser
- esc
- ▁bir
- ▁cur
- name
- ▁tran
- ▁hou
- ek
- uch
- ▁conf
- ▁face
- '9'
- ▁birth
- I
- sw
- transport_descriptor
- ▁comm
- lease
- transport_name
- aid
- movie_type
- ▁device
- alarm_type
- audiobook_author
- '5'
- drink_type
- ▁joh
- ▁defin
- word
- ▁curren
- order
- iness
- W
- cooking_query
- sport_type
- ▁relation
- oint
- H
- '8'
- A
- '0'
- ▁dol
- vice
- ▁pers
- '&'
- T
- ▁appoint
- _
- '7'
- '3'
- '-'
- game_type
- ▁pod
- N
- M
- E
- list
- music_album
- dio
- ▁transport
- qa_query
- C
- O
- U
- query_detail
- ']'
- '['
- descriptor
- ':'
- spon
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 512
attention_heads: 8
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 3072
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
layer_drop_rate: 0.1
linear_units: 1024
positionwise_layer_type: linear
macaron_ffn: true
use_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
layer_drop_rate: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Dabe/LunarLanderPPO2
|
Dabe
| 2023-02-28T01:14:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T01:03:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.85 +/- 48.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**, trained for 1e6 time steps, obtaining:
**mean_reward** = 241.85 +/- 48.02
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gym
from stable_baselines3 import PPO  # Model we are going to use
from stable_baselines3.common.evaluation import evaluate_policy  # Evaluation of the trained model's results
from stable_baselines3.common.env_util import make_vec_env
# Create the env
env = gym.make('LunarLander-v2')
# Select the model, PPO in this case, and train it
model = PPO('MlpPolicy', env, verbose=1).learn(total_timesteps=1000000, progress_bar=True)
# Save it
model.save('Lunar_Lander')
# Create a fresh env to test the model on (the same one, reset, would also work)
eval_env = gym.make('LunarLander-v2')
# Evaluate the model
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
# Print the results
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
|
bsenker/swords-attentive_t5_v1
|
bsenker
| 2023-02-28T00:58:34Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:bsenker/autotrain-data-swords",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-02-28T00:19:44Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: he says this word when he is excited
datasets:
- bsenker/autotrain-data-swords
co2_eq_emissions:
emissions: 0.025105486031472425
pipeline_tag: summarization
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 37880100395
- CO2 Emissions (in grams): 0.0251
## Validation Metrics
- Loss: 1.557
- Rouge1: 62.140
- Rouge2: 13.128
- RougeL: 61.331
- RougeLsum: 60.728
- Gen Len: 3.989
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bsenker/autotrain-swords-37880100395
```
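You can also run it locally with the `transformers` pipeline (the input below is the widget example from the metadata above):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bsenker/autotrain-swords-37880100395")
print(summarizer("he says this word when he is excited")[0]["summary_text"])
```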
|
dcduplooy/SpaceInvadersNoFrameskip-v4
|
dcduplooy
| 2023-02-28T00:47:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T00:47:04Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 701.50 +/- 291.41
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dcduplooy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dcduplooy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dcduplooy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
RaedS/REINFORCE-cartpole-v1
|
RaedS
| 2023-02-28T00:43:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T00:43:38Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: REINFORCE-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JS47/BanglaT5SummaryGenerator
|
JS47
| 2023-02-28T00:36:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-22T07:11:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: BanglaT5SummaryGenerator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BanglaT5SummaryGenerator
This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
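A minimal usage sketch with the `transformers` pipeline (the input placeholder should be replaced with a Bangla passage):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="JS47/BanglaT5SummaryGenerator")
bangla_text = "..."  # paste the Bangla passage to summarize here
print(summarizer(bangla_text)[0]["summary_text"])
```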
|
adam1brownell/u4_cartpole
|
adam1brownell
| 2023-02-28T00:28:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T00:28:12Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: u4_cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
adam1brownell/cartpole
|
adam1brownell
| 2023-02-28T00:28:05Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-28T00:27:57Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Dabe/LunarLanderPPO
|
Dabe
| 2023-02-28T00:09:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T23:47:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -71.24 +/- 99.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gym
from stable_baselines3 import PPO  # Model we are going to use
from stable_baselines3.common.evaluation import evaluate_policy  # Evaluation of the trained model's results
from stable_baselines3.common.env_util import make_vec_env
# Create the env
env = gym.make('LunarLander-v2')
# Select the model, PPO in this case, and train it
model = PPO('MlpPolicy', env, verbose=1).learn(total_timesteps=200000, progress_bar=True)
# Save it
model.save('Lunar_Lander')
# Create a fresh env to test the model on (the same one, reset, would also work)
eval_env = gym.make('LunarLander-v2')
# Evaluate the model
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
# Print the results
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
|
RaedS/q-Taxi-v3
|
RaedS
| 2023-02-27T23:55:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T23:55:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper defined in Unit 2 of the Deep RL Course notebook
model = load_from_hub(repo_id="RaedS/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
RaedS/q-FrozenLake-v1-4x4-noSlippery
|
RaedS
| 2023-02-27T23:53:41Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T23:52:51Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the helper defined in Unit 2 of the Deep RL Course notebook
model = load_from_hub(repo_id="RaedS/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
agercas/dqn-MountainCar-v0
|
agercas
| 2023-02-27T23:34:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T23:15:30Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
metrics:
- type: mean_reward
value: -116.60 +/- 27.47
name: mean_reward
verified: false
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub("agercas/dqn-MountainCar-v0", "dqn-MountainCar-v0.zip")
model = DQN.load(checkpoint)
```
|
Ranjth/sentence-transformers-multi_qa_MiniLM_L6_cos_v1_pa_trained
|
Ranjth
| 2023-02-27T23:21:21Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"doi:10.57967/hf/0415",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-02-27T23:17:06Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Ranjth/sentence-transformers-multi_qa_MiniLM_L6_cos_v1_pa_trained
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Ranjth/sentence-transformers-multi_qa_MiniLM_L6_cos_v1_pa_trained')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Ranjth/sentence-transformers-multi_qa_MiniLM_L6_cos_v1_pa_trained')
model = AutoModel.from_pretrained('Ranjth/sentence-transformers-multi_qa_MiniLM_L6_cos_v1_pa_trained')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Ranjth/sentence-transformers-multi_qa_MiniLM_L6_cos_v1_pa_trained)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Marcoalves20/sd-class-butterflies-32
|
Marcoalves20
| 2023-02-27T23:14:29Z | 33 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-02-27T23:14:12Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Marcoalves20/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
keerthan2/swin-tiny-patch4-window7-224-finetuned-eurosat
|
keerthan2
| 2023-02-27T23:11:49Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-02-27T22:51:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9785185185185186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0575
- Accuracy: 0.9785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2258 | 1.0 | 190 | 0.1188 | 0.9596 |
| 0.1613 | 2.0 | 380 | 0.0786 | 0.9711 |
| 0.1636 | 3.0 | 570 | 0.0575 | 0.9785 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
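A minimal inference sketch with the `transformers` pipeline (the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="keerthan2/swin-tiny-patch4-window7-224-finetuned-eurosat")
print(classifier("satellite_tile.jpg"))  # path or URL to an image
```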
|
Artachtron/SoccerTwos
|
Artachtron
| 2023-02-27T22:44:04Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-02-27T22:35:44Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Artachtron/SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
nanogames/logh-rein
|
nanogames
| 2023-02-27T22:31:39Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-27T22:26:27Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### logh_rein Dreambooth model trained by nanogames with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
G-e-o-r-g-e/Reinforce-Pixelcopter-PLE-v0
|
G-e-o-r-g-e
| 2023-02-27T22:24:13Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T22:24:06Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.80 +/- 15.64
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ojeda01/bert_base_cased_MultiClass_v2
|
Ojeda01
| 2023-02-27T22:11:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-27T18:38:08Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_cased_MultiClass_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_cased_MultiClass_v2
This model is a fine-tuned version of [HMEXBI/bert_base_cased_MultiClass](https://huggingface.co/HMEXBI/bert_base_cased_MultiClass) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9806
- Accuracy: 0.8101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1396 | 1.0 | 545 | 0.9023 | 0.7615 |
| 0.6961 | 2.0 | 1090 | 0.8074 | 0.7798 |
| 0.492 | 3.0 | 1635 | 0.8216 | 0.8009 |
| 0.3032 | 4.0 | 2180 | 0.9264 | 0.8018 |
| 0.1898 | 5.0 | 2725 | 0.9806 | 0.8101 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
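A minimal inference sketch with the `transformers` pipeline (the input sentence is illustrative; the label names come from the fine-tuning config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Ojeda01/bert_base_cased_MultiClass_v2")
print(classifier("Replace this with the text you want to classify"))
```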
|
hg2001/autotrain-male-vs-femalee-37851100302
|
hg2001
| 2023-02-27T22:04:39Z | 37 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:hg2001/autotrain-data-male-vs-femalee",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-02-27T22:03:50Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- hg2001/autotrain-data-male-vs-femalee
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.0034341761338042994
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 37851100302
- CO2 Emissions (in grams): 0.0034
## Validation Metrics
- Loss: 0.060
- Accuracy: 0.979
- Precision: 0.960
- Recall: 1.000
- AUC: 1.000
- F1: 0.980
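A minimal inference sketch with the `transformers` pipeline (the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hg2001/autotrain-male-vs-femalee-37851100302")
print(classifier("portrait.jpg"))  # path or URL to an image
```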
|
agercas/dqn-FrozenLake-v1
|
agercas
| 2023-02-27T22:01:36Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"FrozenLake-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T20:51:51Z |
---
library_name: stable-baselines3
tags:
- FrozenLake-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1
type: FrozenLake-v1
metrics:
- type: mean_reward
value: 0.90 +/- 0.30
name: mean_reward
verified: false
---
# **DQN** Agent playing **FrozenLake-v1**
This is a trained model of a **DQN** agent playing **FrozenLake-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub("agercas/dqn-FrozenLake-v1", "dqn-FrozenLake-v1.zip")
model = DQN.load(checkpoint)
```
|
JYC333/rl_course_vizdoom_health_gathering_supreme
|
JYC333
| 2023-02-27T21:56:53Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T21:56:23Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.75 +/- 5.67
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r JYC333/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
EnD-Diffusers/Gambit_and_Rogue
|
EnD-Diffusers
| 2023-02-27T21:52:08Z | 11 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"comics",
"x-men",
"illlustration",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-27T03:45:27Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- comics
- x-men
- illustration
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
[](https://huggingface.co/spaces/Duskfallcrew/Gambit_and_Rogue)
### Gambit and Rogue Dreambooth model trained by Duskfallcrew with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
# If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
# Future Model Updates and Merges:
https://civitai.com/user/duskfallcrew
# Token: comidusk
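A minimal generation sketch with `diffusers` (the prompt uses the token above; the rest of the prompt and the fp16/CUDA settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "EnD-Diffusers/Gambit_and_Rogue", torch_dtype=torch.float16
).to("cuda")
image = pipe("comidusk, gambit and rogue, comic illustration").images[0]
image.save("gambit_rogue.png")
```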
# Sample Images Are available here
More samples in here: https://huggingface.co/Duskfallcrew/cajun-and-belle/tree/main/Cajun%20Belle%20outputs
I didn't keep the settings I normally do, so text files weren't included, sadly.



|
Maisman/Etna-from-disgaea
|
Maisman
| 2023-02-27T21:27:53Z | 0 | 4 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-27T19:19:34Z |
---
license: creativeml-openrail-m
---
This is Etna from Disgaea!
How to trigger:
ETNA,
HIGH QUALITY,
BAT WINGS,
WINGS,
FLAT CHEST,
SHIELD,
TWINTAILS,
SWORD,
RED HAIR,
FROM SIDE,
FROM BEHIND,
GOOD HANDS,
ASS,
LYING,
SITTING
For hentai use:
nude, pussy
Enjoy :)
---
Download: [Etna](https://huggingface.co/Maisman/Etna-from-disgaea/blob/main/EtnaDisgaeaLora.safetensors)
Civitai: [Etna on Civitai](https://civitai.com/models/13914/etna-from-disgaea)
---
Example Prompts:
<p align="center"><img src="https://huggingface.co/Maisman/Etna-from-disgaea/resolve/main/02046-33282909-((masterpiece%2C%20best%20quality_1.2))%2C%20(ultra-detailed_1.2)%2C%20(8k)%2C%20highres%2C%20%2C%20etna%20(disgaea)%2C%20red%20hair%2C%20bat%20wings%2C%20wings%2C%20(tail)%2C%20sm.png">
Prompt:
```
((masterpiece, best quality:1.2)), (ultra-detailed:1.2), (8k), highres, <lora:EtnaDisgaeaLora:0.8>, etna (disgaea), red hair, bat wings, wings, (tail), small breasts, twintails, (good hands, 5 fingers), twintails, (high quality), etna \(disgaea\), blurry background, cinematic background, night background, stars in background,
```
Negative:
```
(nsfw, nude), (bad anatomy), extra limbs, lowres, destorced, (worst quality:1.4), (mouth open), (low quality:1.4), (trembling:1.4), (cropped head:1.4), (blurry), destorced hands,(disfigured), (bad hands, bad fingers, 1 finger, 2 fingers, 3 fingers, 6 fingers), watermark, wide hips, extra legs, bad legs, (By bad artist -neg), easynegative, blurry foreground, NG_DeepNegative_V1_75T, multiple views, portrait, collage, projected inset, portrait, plain background, simple background, lowres, signature, watermark, username, caption, (ugly eyes, deformed iris, deformed pupils, fused lips and teeth:1.2), text, cropped, logo, (worst quality, low quality, jpeg artifacts:1.2), (3D, 3D game, game render:1.2), ugly, duplicate, morbid, mutilated, extra fingers, mutated hands and fingers, twisted fingers, twisted hand, poorly drawn hands, malformed hands, poorly drawn face, mutation, deformed, blurry, penis, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, malformed limbs, missing arms, missing legs, missing limbs, extra arms, extra legs, extra hands, extra arms, three hands, three arms, distorted arms, malformed arms, bad body, bad legs, fused fingers, too many fingers, missing fingers, long neck, obese, fat, out of frame, censored, censor_bar, censor bar, animal_ears, animal ears, cat ears, dog ears, elf ears, large breasts, huge breasts, fused limbs, (yuri, yaoi, futa, futa_with_female:1.2), artist_name, genderswap, speech_bubble, wings, demon_girl, (scat, poop:1.4), large areolas, puffy nipples,
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 5.5, Seed: 33282909, Size: 512x768, Model: abyssorangemix2_Hardcore, Denoising strength: 0.7, Hires upscale: 2, Hires steps: 40, Hires upscaler: Latent (bicubic antialiased)
---
<p align="center"><img src="https://huggingface.co/Maisman/Etna-from-disgaea/resolve/main/02068-57624936-((masterpiece%2C%20best%20quality_1.2))%2C%20(ultra-detailed_1.2)%2C%20(8k)%2C%20highres%2C%20%2C%20etna%20(disgaea)%2C%20red%20hair%2C%20bat%20wings%2C%20wings%2C%20(tail)%2C%20sm.png">
Prompt:
```
((masterpiece, best quality:1.2)), (ultra-detailed:1.2), (8k), highres, <lora:EtnaDisgaeaLora:0.6>, etna (disgaea), red hair, bat wings, wings, (tail), small breasts, twintails, (good hands, 5 fingers), twintails, (high quality), etna \(disgaea\), blurry background, cinematic background, night background, stars in background, (hands behind back), (loli, )
```
Negative:
```
(nsfw, nude), (bad anatomy), extra limbs, lowres, destorced, (worst quality:1.4), (mouth open), (low quality:1.4), (trembling:1.4), (cropped head:1.4), (blurry), destorced hands,(disfigured), (bad hands, bad fingers, 1 finger, 2 fingers, 3 fingers, 6 fingers), watermark, wide hips, extra legs, bad legs, (By bad artist -neg), easynegative, blurry foreground, NG_DeepNegative_V1_75T, multiple views, portrait, collage, projected inset, portrait, plain background, simple background, lowres, signature, watermark, username, caption, (ugly eyes, deformed iris, deformed pupils, fused lips and teeth:1.2), text, cropped, logo, (worst quality, low quality, jpeg artifacts:1.2), (3D, 3D game, game render:1.2), ugly, duplicate, morbid, mutilated, extra fingers, mutated hands and fingers, twisted fingers, twisted hand, poorly drawn hands, malformed hands, poorly drawn face, mutation, deformed, blurry, penis, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, malformed limbs, missing arms, missing legs, missing limbs, extra arms, extra legs, extra hands, extra arms, three hands, three arms, distorted arms, malformed arms, bad body, bad legs, fused fingers, too many fingers, missing fingers, long neck, obese, fat, out of frame, censored, censor_bar, censor bar, animal_ears, animal ears, cat ears, dog ears, elf ears, large breasts, huge breasts, fused limbs, (yuri, yaoi, futa, futa_with_female:1.2), artist_name, genderswap, speech_bubble, wings, demon_girl, (scat, poop:1.4), large areolas, puffy nipples,
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 8, Seed: 57624936, Size: 512x768, Model: abyssorangemix2_Hardcore
|
lucadiliello/deberta-small
|
lucadiliello
| 2023-02-27T21:26:23Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta",
"fill-mask",
"en",
"dataset:c4",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-27T14:34:20Z |
---
datasets:
- c4
language:
- en
metrics:
- accuracy
pipeline_tag: fill-mask
---
A small version of `DeBERTa` trained on the clean version of the Google C4 dataset. For more info about the size of the model, see `config.json`.
The model has been trained for **100K** steps with a batch size of **2048** and a sequence length of **512**, for a total of **104B** tokens.
The vocabulary and the tokenizer are the same as `microsoft/deberta-base`.
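As an illustrative sketch (not from the original card), the checkpoint can be queried with the standard `transformers` fill-mask pipeline; since the tokenizer matches `microsoft/deberta-base`, the mask token is `[MASK]`:
```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with this checkpoint.
unmasker = pipeline("fill-mask", model="lucadiliello/deberta-small")

# The tokenizer is the same as microsoft/deberta-base, so the mask token is [MASK].
print(unmasker("The capital of France is [MASK]."))
```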
|
augustocsc/ppo-LunarLander-v2
|
augustocsc
| 2023-02-27T21:13:47Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-26T17:48:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.52 +/- 16.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention, not confirmed by the repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub(repo_id="augustocsc/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
caioiglesias/a2c-AntBulletEnv-v0
|
caioiglesias
| 2023-02-27T21:13:03Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T21:11:49Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1338.12 +/- 79.76
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption from the common naming convention; verify against the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub(repo_id="caioiglesias/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
neulab/codebert-java
|
neulab
| 2023-02-27T20:55:40Z | 2,618 | 13 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2302.05527",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-09-26T14:10:02Z |
This is a `microsoft/codebert-base-mlm` model, trained for 1,000,000 steps (with `batch_size=32`) on **Java** code from the `codeparrot/github-code-clean` dataset, on the masked-language-modeling task.
It is intended to be used in CodeBERTScore: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score), but it can also be used for any other task.
For more information, see: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score)
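A minimal usage sketch (an assumption on top of the card, not an official example): since the model is RoBERTa-based and trained for masked language modeling, it can be queried with the `transformers` fill-mask pipeline using the `<mask>` token:
```python
from transformers import pipeline

# Sketch only: masked-token prediction over Java source code.
fill_mask = pipeline("fill-mask", model="neulab/codebert-java")

# RoBERTa-style checkpoints use <mask> as the mask token.
print(fill_mask("public static void <mask>(String[] args) { }"))
```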
## Citation
If you use this model for research, please cite:
```
@article{zhou2023codebertscore,
url = {https://arxiv.org/abs/2302.05527},
author = {Zhou, Shuyan and Alon, Uri and Agarwal, Sumit and Neubig, Graham},
title = {CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code},
publisher = {arXiv},
year = {2023},
}
```
|
Dochee/xlm-roberta-base-finetuned-panx-de
|
Dochee
| 2023-02-27T20:49:25Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-02-27T19:52:39Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8637881274404392
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1569
- F1: 0.8638
## Model description
Multilingual named entity recognition across several languages.
For this token-classification project, a custom model head was built and trained on WikiANN (also known as PAN-X), a subset of the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark. The project was completed for a customer based
in Switzerland, where the four most frequently spoken languages are
German (62.9% of articles), French (22.9%), Italian (8.4%), and English (5.9%).
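A minimal inference sketch (assuming the standard `transformers` token-classification pipeline; the German example sentence is purely illustrative):
```python
from transformers import pipeline

# Sketch: German NER with entity-level aggregation.
ner = pipeline(
    "token-classification",
    model="Dochee/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel wohnt in Berlin."))
```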
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
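The hyperparameters above, rendered as a hedged `TrainingArguments` sketch (the `output_dir` is illustrative, and Adam with the listed betas/epsilon is the `transformers` default optimizer):
```python
from transformers import TrainingArguments

# Sketch of the listed configuration; all other fields stay at their defaults.
args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```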
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3044 | 1.0 | 525 | 0.1598 | 0.8174 |
| 0.1462 | 2.0 | 1050 | 0.1527 | 0.8308 |
| 0.1006 | 3.0 | 1575 | 0.1487 | 0.8459 |
| 0.0698 | 4.0 | 2100 | 0.1431 | 0.8615 |
| 0.0472 | 5.0 | 2625 | 0.1569 | 0.8638 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Manuel-I/distilgpt2-finetuned-shakespeare
|
Manuel-I
| 2023-02-27T20:37:22Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-27T20:11:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Manuel-I/distilgpt2-finetuned-shakespeare
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Manuel-I/distilgpt2-finetuned-shakespeare
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2540
- Validation Loss: 3.4994
- Epoch: 16
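A minimal generation sketch (an assumption, not part of the original card; the repo's tags suggest TensorFlow weights, hence `framework="tf"`):
```python
from transformers import pipeline

# Sketch: text generation with the fine-tuned checkpoint (TF weights assumed).
generator = pipeline(
    "text-generation",
    model="Manuel-I/distilgpt2-finetuned-shakespeare",
    framework="tf",
)
print(generator("Shall I compare thee", max_new_tokens=40)[0]["generated_text"])
```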
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
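The optimizer configuration above, rendered as a sketch with the `AdamWeightDecay` class that `transformers` provides for Keras training (assumed to be what produced this config):
```python
from transformers import AdamWeightDecay

# Sketch: the listed AdamWeightDecay configuration, field for field.
optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
```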
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2103 | 3.8318 | 0 |
| 3.9016 | 3.7030 | 1 |
| 3.7804 | 3.6243 | 2 |
| 3.7072 | 3.5896 | 3 |
| 3.6501 | 3.5631 | 4 |
| 3.6008 | 3.5464 | 5 |
| 3.5605 | 3.5322 | 6 |
| 3.5229 | 3.5230 | 7 |
| 3.4857 | 3.5189 | 8 |
| 3.4517 | 3.5118 | 9 |
| 3.4204 | 3.5034 | 10 |
| 3.3916 | 3.4978 | 11 |
| 3.3621 | 3.4949 | 12 |
| 3.3332 | 3.5003 | 13 |
| 3.3063 | 3.4998 | 14 |
| 3.2773 | 3.5015 | 15 |
| 3.2540 | 3.4994 | 16 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
mkuntz/Reinforce-Cartpole-v1
|
mkuntz
| 2023-02-27T20:29:36Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-27T20:28:58Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
andim/job_light_with_cardinalities
|
andim
| 2023-02-27T20:18:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-02-24T10:28:41Z |
This is a dataset containing the JOB-light workload along with the associated ground-truth cardinality on the IMDB dataset for each query.
JOB-light is a workload of 70 queries derived from the Join Order Benchmark (JOB); it contains no string predicates or disjunctions and is limited to at most four joins per query.
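Since the card does not document the file layout, a hedged first step is simply to list the repository contents with `huggingface_hub`:
```python
from huggingface_hub import list_repo_files

# Sketch: inspect the repo before downloading; the file layout is not documented.
print(list_repo_files("andim/job_light_with_cardinalities"))
```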
|