Dataset columns:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-02 18:52:31 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 533 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-02 18:52:05 |
| card | string | length 11 – 1.01M |
carolinetfls/plant-seedlings-model-beit-free-0-6
|
carolinetfls
| 2023-04-26T23:00:49Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-04-26T20:11:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: plant-seedlings-model-beit-free-0-6
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7475442043222004
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plant-seedlings-model-beit-free-0-6
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7557
- Accuracy: 0.7475
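Since the card does not yet include a usage example, here is a minimal sketch of running inference with the `transformers` pipeline; the image path is a placeholder and the label names come from the training image folders:
```python
from transformers import pipeline

# Load the fine-tuned BEiT classifier from the Hub
classifier = pipeline(
    "image-classification",
    model="carolinetfls/plant-seedlings-model-beit-free-0-6",
)

# "seedling.jpg" is a placeholder path; pass any seedling photo
predictions = classifier("seedling.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```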
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.4892 | 0.2 | 100 | 2.4909 | 0.0751 |
| 2.4906 | 0.39 | 200 | 2.4886 | 0.0756 |
| 2.3925 | 0.59 | 300 | 2.3344 | 0.1537 |
| 2.31 | 0.79 | 400 | 2.3306 | 0.1464 |
| 2.2355 | 0.98 | 500 | 2.2335 | 0.1778 |
| 2.2642 | 1.18 | 600 | 2.1889 | 0.1807 |
| 2.0806 | 1.38 | 700 | 2.3229 | 0.1680 |
| 2.1013 | 1.57 | 800 | 2.1519 | 0.2004 |
| 2.0094 | 1.77 | 900 | 2.0611 | 0.2146 |
| 2.0387 | 1.96 | 1000 | 2.0413 | 0.2210 |
| 2.0032 | 2.16 | 1100 | 1.9758 | 0.2618 |
| 1.986 | 2.36 | 1200 | 1.9238 | 0.2638 |
| 2.0885 | 2.55 | 1300 | 1.8944 | 0.2942 |
| 1.8808 | 2.75 | 1400 | 1.9330 | 0.2868 |
| 1.915 | 2.95 | 1500 | 1.8919 | 0.2814 |
| 1.958 | 3.14 | 1600 | 1.8762 | 0.3114 |
| 1.9001 | 3.34 | 1700 | 1.8389 | 0.3232 |
| 1.8572 | 3.54 | 1800 | 1.7978 | 0.3487 |
| 1.9969 | 3.73 | 1900 | 1.9371 | 0.3089 |
| 1.9186 | 3.93 | 2000 | 1.8055 | 0.3502 |
| 1.7591 | 4.13 | 2100 | 1.7695 | 0.3428 |
| 1.8368 | 4.32 | 2200 | 1.7498 | 0.3502 |
| 1.9842 | 4.52 | 2300 | 1.8049 | 0.3193 |
| 1.7606 | 4.72 | 2400 | 1.6730 | 0.3954 |
| 1.7787 | 4.91 | 2500 | 1.7104 | 0.3777 |
| 1.6377 | 5.11 | 2600 | 1.6647 | 0.3870 |
| 1.8834 | 5.3 | 2700 | 1.6325 | 0.3973 |
| 1.6149 | 5.5 | 2800 | 1.6722 | 0.3787 |
| 1.7038 | 5.7 | 2900 | 1.6425 | 0.3973 |
| 1.682 | 5.89 | 3000 | 1.5927 | 0.4180 |
| 1.6326 | 6.09 | 3100 | 1.4982 | 0.4622 |
| 1.5687 | 6.29 | 3200 | 1.4440 | 0.4774 |
| 1.3637 | 6.48 | 3300 | 1.4477 | 0.4877 |
| 1.4079 | 6.68 | 3400 | 1.3827 | 0.5020 |
| 1.3721 | 6.88 | 3500 | 1.4069 | 0.5010 |
| 1.5675 | 7.07 | 3600 | 1.3595 | 0.5083 |
| 1.5725 | 7.27 | 3700 | 1.3790 | 0.4956 |
| 1.4522 | 7.47 | 3800 | 1.3116 | 0.5378 |
| 1.4692 | 7.66 | 3900 | 1.3729 | 0.4980 |
| 1.5073 | 7.86 | 4000 | 1.3799 | 0.5216 |
| 1.2529 | 8.06 | 4100 | 1.2706 | 0.5486 |
| 1.3727 | 8.25 | 4200 | 1.2519 | 0.5535 |
| 1.2451 | 8.45 | 4300 | 1.2595 | 0.5648 |
| 1.339 | 8.64 | 4400 | 1.3614 | 0.5172 |
| 1.2858 | 8.84 | 4500 | 1.3028 | 0.5393 |
| 1.1039 | 9.04 | 4600 | 1.2309 | 0.5771 |
| 1.0351 | 9.23 | 4700 | 1.2678 | 0.5609 |
| 1.1125 | 9.43 | 4800 | 1.2786 | 0.5624 |
| 1.1667 | 9.63 | 4900 | 1.2131 | 0.5840 |
| 1.1386 | 9.82 | 5000 | 1.1359 | 0.6154 |
| 1.1888 | 10.02 | 5100 | 1.1309 | 0.6041 |
| 1.1777 | 10.22 | 5200 | 1.1288 | 0.6287 |
| 1.3693 | 10.41 | 5300 | 1.3827 | 0.5182 |
| 1.1016 | 10.61 | 5400 | 1.2255 | 0.5594 |
| 1.1527 | 10.81 | 5500 | 1.0772 | 0.6434 |
| 1.1039 | 11.0 | 5600 | 1.1032 | 0.6100 |
| 1.2502 | 11.2 | 5700 | 1.1230 | 0.6169 |
| 1.0818 | 11.39 | 5800 | 1.0750 | 0.6302 |
| 1.0872 | 11.59 | 5900 | 1.0397 | 0.6331 |
| 1.0425 | 11.79 | 6000 | 1.0231 | 0.6483 |
| 1.0791 | 11.98 | 6100 | 1.0250 | 0.6636 |
| 0.9736 | 12.18 | 6200 | 1.0879 | 0.6267 |
| 0.9788 | 12.38 | 6300 | 1.1334 | 0.5968 |
| 0.8982 | 12.57 | 6400 | 0.9934 | 0.6528 |
| 1.077 | 12.77 | 6500 | 0.9698 | 0.6812 |
| 1.0347 | 12.97 | 6600 | 1.0265 | 0.6513 |
| 0.9159 | 13.16 | 6700 | 0.9442 | 0.6788 |
| 1.1187 | 13.36 | 6800 | 0.9738 | 0.6685 |
| 0.9624 | 13.56 | 6900 | 1.0008 | 0.6699 |
| 0.922 | 13.75 | 7000 | 0.9502 | 0.6906 |
| 0.9317 | 13.95 | 7100 | 0.9687 | 0.6758 |
| 0.9979 | 14.15 | 7200 | 0.9869 | 0.6768 |
| 0.8362 | 14.34 | 7300 | 0.9220 | 0.6994 |
| 0.8449 | 14.54 | 7400 | 0.9181 | 0.6861 |
| 0.9678 | 14.73 | 7500 | 0.9789 | 0.6729 |
| 0.9119 | 14.93 | 7600 | 0.8879 | 0.7009 |
| 0.9517 | 15.13 | 7700 | 0.8816 | 0.6994 |
| 0.9688 | 15.32 | 7800 | 0.8803 | 0.7117 |
| 0.8625 | 15.52 | 7900 | 0.8782 | 0.7038 |
| 0.9121 | 15.72 | 8000 | 0.8225 | 0.7191 |
| 0.9035 | 15.91 | 8100 | 0.8649 | 0.7087 |
| 0.8762 | 16.11 | 8200 | 0.8427 | 0.7102 |
| 0.7708 | 16.31 | 8300 | 0.8685 | 0.7117 |
| 0.8893 | 16.5 | 8400 | 0.8178 | 0.7264 |
| 0.9584 | 16.7 | 8500 | 0.8709 | 0.7092 |
| 0.757 | 16.9 | 8600 | 0.8244 | 0.7254 |
| 0.8184 | 17.09 | 8700 | 0.8128 | 0.7240 |
| 0.8858 | 17.29 | 8800 | 0.8360 | 0.7156 |
| 0.7116 | 17.49 | 8900 | 0.7952 | 0.7279 |
| 0.9579 | 17.68 | 9000 | 0.8263 | 0.7274 |
| 0.7037 | 17.88 | 9100 | 0.7884 | 0.7348 |
| 1.0359 | 18.07 | 9200 | 0.8118 | 0.7402 |
| 1.067 | 18.27 | 9300 | 0.8203 | 0.7186 |
| 0.8503 | 18.47 | 9400 | 0.7918 | 0.7362 |
| 0.8552 | 18.66 | 9500 | 0.7972 | 0.7382 |
| 0.7498 | 18.86 | 9600 | 0.8038 | 0.7343 |
| 0.8542 | 19.06 | 9700 | 0.7799 | 0.7333 |
| 0.9539 | 19.25 | 9800 | 0.7795 | 0.7333 |
| 0.7369 | 19.45 | 9900 | 0.8103 | 0.7269 |
| 0.6637 | 19.65 | 10000 | 0.7597 | 0.7441 |
| 0.6712 | 19.84 | 10100 | 0.7557 | 0.7475 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Sergendel/a2c-AntBulletEnv-v0
|
Sergendel
| 2023-04-26T22:48:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T18:09:29Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1226.61 +/- 481.06
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
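Until the TODO above is filled in, a minimal loading sketch might look like the following; the `filename` is an assumption based on the usual `<algo>-<env>.zip` naming used by `huggingface_sb3`:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed checkpoint filename; check the repo's file list if it differs
checkpoint = load_from_hub(
    repo_id="Sergendel/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```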
|
kerdel/GenerAd-AI
|
kerdel
| 2023-04-26T22:23:59Z | 36 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"text-generation",
"dataset:kerdel/generative_ai_sample",
"license:bigscience-openrail-m",
"region:us"
] |
text-generation
| 2023-04-26T22:18:12Z |
---
license: bigscience-openrail-m
datasets:
- kerdel/generative_ai_sample
library_name: adapter-transformers
pipeline_tag: text-generation
---
|
myklicious/Reinforce-CartPole-V1
|
myklicious
| 2023-04-26T21:49:52Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T21:49:40Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-V1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
FacehugmanIII/4x_foolhardy_Remacri
|
FacehugmanIII
| 2023-04-26T21:49:37Z | 0 | 90 | null |
[
"art",
"license:unknown",
"region:us"
] | null | 2023-04-26T21:39:16Z |
---
license: unknown
tags:
- art
---
Using the Remacri upscaler in Automatic1111:
1. Get the '4x_foolhardy_Remacri.pth' file linked in this post.
2. Copy it to `\stable-diffusion-webui\models\ESRGAN`.
3. Restart the WebUI.
4x_foolhardy_Remacri is now available in the Extras tab and for the SD Upscale script.
I didn't create this upscaler; I simply downloaded it from a random link on reddit and uploaded it here because I couldn't find it anywhere else.
|
Dsfajardob/rl_course_vizdoom_health_gathering_supreme
|
Dsfajardob
| 2023-04-26T21:45:35Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T20:57:34Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.04 +/- 5.36
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Dsfajardob/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The auto-generated command pointed at the notebook's ipykernel launcher path;
# for ViZDoom environments the enjoy entrypoint is sf_examples.vizdoom.enjoy_vizdoom:
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# As above, the notebook's launcher path is replaced with the ViZDoom train entrypoint:
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment resumes from the step count at which it previously concluded.
|
jkorstad/a2c-PandaReachDense-v2
|
jkorstad
| 2023-04-26T21:18:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T05:03:19Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.03 +/- 1.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jmartin233/bloom-1b7-lora-reading-comprehension
|
jmartin233
| 2023-04-26T20:50:10Z | 0 | 0 | null |
[
"text-generation",
"en",
"dataset:jmartin233/reading_comprehension_exercise_dataset_v2",
"license:bigscience-openrail-m",
"region:us"
] |
text-generation
| 2023-04-26T19:25:42Z |
---
license: bigscience-openrail-m
datasets:
- jmartin233/reading_comprehension_exercise_dataset_v2
language:
- en
pipeline_tag: text-generation
---
# Model Card for bloom-1b7-lora-reading-comprehension
The model generates short reading comprehension exercises for English teachers to use.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Finetuned using this process: https://github.com/FourthBrain/Building-Generative-AI-Apps/blob/main/GenerAd-AI/notebooks/%F0%9F%92%AE%20GenerAd%20AI%F0%9F%92%AE%20Fine%20tuning%20BLOOM.ipynb
## Uses
English teachers can use the model to generate short texts that use specified types of grammar and are written at a specified level (beginner, intermediate, or advanced).
|
amitrajitbh1/distilroberta-base-finetuned-teen-2
|
amitrajitbh1
| 2023-04-26T20:49:20Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-04-26T20:14:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-teen-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-teen-2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0436
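No usage snippet is provided, so here is a minimal sketch with the fill-mask pipeline; the example sentence is arbitrary:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="amitrajitbh1/distilroberta-base-finetuned-teen-2",
)
# RoBERTa-style models use <mask> as the mask token
print(fill_mask("The weather today is really <mask>."))
```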
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5736 | 1.0 | 157 | 3.3554 |
| 3.1559 | 2.0 | 314 | 3.1532 |
| 3.0252 | 3.0 | 471 | 3.0850 |
| 2.858 | 4.0 | 628 | 2.9401 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
JuanDa14Sa/Taxi-v3-QLearn
|
JuanDa14Sa
| 2023-04-26T20:42:28Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T20:30:22Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-QLearn
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
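# Note: `load_from_hub` here is not a library import; it is the small helper
# defined in the Deep RL course notebook (it fetches the pickled Q-table via
# huggingface_hub). `gym` / `gymnasium` must also be imported for `gym.make`.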
model = load_from_hub(repo_id="JuanDa14Sa/Taxi-v3-QLearn", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
akadhim-ai/sd_aiconos-model-v1-2_400
|
akadhim-ai
| 2023-04-26T20:37:30Z | 31 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"art",
"text-to-image",
"en",
"dataset:Ali-fb/ios_icons_2",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-26T14:02:51Z |
---
license: openrail
datasets:
- Ali-fb/ios_icons_2
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
|
Pedrampedram/MarketMail-AI
|
Pedrampedram
| 2023-04-26T20:27:00Z | 0 | 0 | null |
[
"marketing",
"question-answering",
"en",
"dataset:Pedrampedram/MarketMail-AI-Dataset",
"license:openrail",
"region:us"
] |
question-answering
| 2023-04-26T20:00:26Z |
---
license: openrail
datasets:
- Pedrampedram/MarketMail-AI-Dataset
language:
- en
pipeline_tag: question-answering
tags:
- marketing
---
|
khadija267/distilbert-base-uncased-distilled-clinc
|
khadija267
| 2023-04-26T20:25:28Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T00:36:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.947741935483871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2830
- Accuracy: 0.9477
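The card has no usage section; a minimal, hedged example with the text-classification pipeline (the returned labels are CLINC150 intent names):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="khadija267/distilbert-base-uncased-distilled-clinc",
)
print(classifier("Please transfer 100 dollars to my savings account"))
```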
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.8723 | 1.0 | 318 | 2.8941 | 0.7461 |
| 2.2155 | 2.0 | 636 | 1.4516 | 0.8613 |
| 1.0985 | 3.0 | 954 | 0.7466 | 0.9152 |
| 0.5635 | 4.0 | 1272 | 0.4707 | 0.9358 |
| 0.3294 | 5.0 | 1590 | 0.3628 | 0.9429 |
| 0.221 | 6.0 | 1908 | 0.3173 | 0.9439 |
| 0.1671 | 7.0 | 2226 | 0.2968 | 0.9477 |
| 0.14 | 8.0 | 2544 | 0.2876 | 0.9484 |
| 0.1263 | 9.0 | 2862 | 0.2838 | 0.9471 |
| 0.1189 | 10.0 | 3180 | 0.2830 | 0.9477 |
### Framework versions
- Transformers 4.11.3
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Dsfajardob/ppo-LunarLander-v2
|
Dsfajardob
| 2023-04-26T20:19:44Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"tensorboard",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-28T20:15:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.59 +/- 20.52
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
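As a placeholder until the TODO is filled in, here is a hedged sketch of loading and evaluating the agent; the `filename` follows the common `<algo>-<env>.zip` convention and is an assumption:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="Dsfajardob/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Evaluate over a few episodes (requires gymnasium[box2d])
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```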
|
minoosh/ast-finetuned-audioset-10-10-0.4593-finetuned-ie
|
minoosh
| 2023-04-26T20:09:47Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-04-26T09:55:48Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-ie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-ie
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.0698
- eval_accuracy: 0.6076
- eval_runtime: 163.7462
- eval_samples_per_second: 7.579
- eval_steps_per_second: 0.953
- epoch: 18.08
- step: 1844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
khadija267/distilbert-base-uncased-finetuned-clinc
|
khadija267
| 2023-04-26T20:02:43Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-25T23:46:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7754
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2893 | 1.0 | 318 | 3.2831 | 0.7397 |
| 2.6289 | 2.0 | 636 | 1.8731 | 0.8345 |
| 1.5481 | 3.0 | 954 | 1.1580 | 0.89 |
| 1.0137 | 4.0 | 1272 | 0.8584 | 0.9077 |
| 0.7969 | 5.0 | 1590 | 0.7754 | 0.9161 |
### Framework versions
- Transformers 4.11.3
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.10.3
|
yerx/videomae-base-finetuned-basketball-subset-20epochs
|
yerx
| 2023-04-26T20:01:14Z | 63 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-04-26T18:56:04Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-basketball-subset-20epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-basketball-subset-20epochs
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8785
- Accuracy: 0.1972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4060
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2525 | 0.05 | 200 | 0.7720 | 0.52 |
| 0.8649 | 1.05 | 400 | 0.7721 | 0.48 |
| 1.0703 | 2.05 | 600 | 1.3605 | 0.52 |
| 0.606 | 3.05 | 800 | 1.0668 | 0.6 |
| 2.0221 | 4.05 | 1000 | 1.1741 | 0.56 |
| 1.2916 | 5.05 | 1200 | 1.4747 | 0.52 |
| 1.4861 | 6.05 | 1400 | 1.1454 | 0.6 |
| 1.3012 | 7.05 | 1600 | 1.6105 | 0.56 |
| 1.3327 | 8.05 | 1800 | 1.2343 | 0.52 |
| 2.077 | 9.05 | 2000 | 1.3243 | 0.6 |
| 1.2349 | 10.05 | 2200 | 1.2044 | 0.6 |
| 1.005 | 11.05 | 2400 | 1.6417 | 0.52 |
| 1.1622 | 12.05 | 2600 | 1.3058 | 0.56 |
| 0.8031 | 13.05 | 2800 | 0.6776 | 0.48 |
| 0.8588 | 14.05 | 3000 | 1.1644 | 0.64 |
| 0.8451 | 15.05 | 3200 | 0.8491 | 0.64 |
| 1.1336 | 16.05 | 3400 | 1.0237 | 0.6 |
| 1.5719 | 17.05 | 3600 | 1.0391 | 0.64 |
| 0.4892 | 18.05 | 3800 | 0.9995 | 0.64 |
| 1.2092 | 19.05 | 4000 | 0.9802 | 0.56 |
| 0.9784 | 20.01 | 4060 | 0.9771 | 0.56 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Dsfajardob/ppo-LunarLander-v2-U8
|
Dsfajardob
| 2023-04-26T19:56:33Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T19:56:25Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -218.11 +/- 142.77
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Dsfajardob/ppo-LunarLander-v2-U8',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Anyayolp/t5-end2end-questions-generation
|
Anyayolp
| 2023-04-26T19:56:27Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-26T14:25:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5674
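No usage snippet is included; below is a hedged sketch with the text2text-generation pipeline. The exact prompt format depends on how the `squad_modified_for_t5_qg` examples were built, so feeding the plain context is an assumption:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Anyayolp/t5-end2end-questions-generation",
)

context = (
    "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars "
    "in Paris, France. It was named after the engineer Gustave Eiffel."
)
# In this setup, generated questions are usually separated by a <sep> token
print(generator(context, max_new_tokens=64)[0]["generated_text"])
```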
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5884 | 0.34 | 100 | 1.9159 |
| 1.9705 | 0.68 | 200 | 1.7310 |
| 1.8439 | 1.02 | 300 | 1.6672 |
| 1.7426 | 1.35 | 400 | 1.6382 |
| 1.7147 | 1.69 | 500 | 1.6199 |
| 1.6908 | 2.03 | 600 | 1.6053 |
| 1.6315 | 2.37 | 700 | 1.5967 |
| 1.627 | 2.71 | 800 | 1.5939 |
| 1.6122 | 3.05 | 900 | 1.5877 |
| 1.5706 | 3.39 | 1000 | 1.5861 |
| 1.5708 | 3.73 | 1100 | 1.5742 |
| 1.5534 | 4.06 | 1200 | 1.5798 |
| 1.5351 | 4.4 | 1300 | 1.5738 |
| 1.5226 | 4.74 | 1400 | 1.5757 |
| 1.5187 | 5.08 | 1500 | 1.5727 |
| 1.4963 | 5.42 | 1600 | 1.5710 |
| 1.4841 | 5.76 | 1700 | 1.5668 |
| 1.5025 | 6.1 | 1800 | 1.5688 |
| 1.4778 | 6.44 | 1900 | 1.5717 |
| 1.4769 | 6.77 | 2000 | 1.5674 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
mrm8488/vit-base-patch16-224_finetuned-kvasirv2-colonoscopy
|
mrm8488
| 2023-04-26T19:55:30Z | 651 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"medical",
"colon",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- medical
- colon
metrics:
- accuracy: 0.93
---
# Vision Transformer fine-tuned on kvasir_v2 for colonoscopy classification
## Demo
### Drag the following images to the widget to test the model
- 
- 
- 
- 
## Training
You can find the code [here](https://github.com/qanastek/HugsVision/blob/main/recipes/kvasir_v2/binary_classification/Kvasir_v2_Image_Classifier.ipynb)
## Metrics
```
                        precision    recall  f1-score   support

    dyed-lifted-polyps       0.95      0.93      0.94        60
dyed-resection-margins       0.97      0.95      0.96        64
           esophagitis       0.93      0.79      0.85        67
          normal-cecum       1.00      0.98      0.99        54
        normal-pylorus       0.95      1.00      0.97        57
         normal-z-line       0.82      0.93      0.87        67
                polyps       0.92      0.92      0.92        52
    ulcerative-colitis       0.93      0.95      0.94        59

              accuracy                           0.93       480
             macro avg       0.93      0.93      0.93       480
          weighted avg       0.93      0.93      0.93       480
```
## How to use
```py
from transformers import ViTFeatureExtractor, ViTForImageClassification
from hugsvision.inference.VisionClassifierInference import VisionClassifierInference
path = "mrm8488/vit-base-patch16-224_finetuned-kvasirv2-colonoscopy"
classifier = VisionClassifierInference(
feature_extractor = ViTFeatureExtractor.from_pretrained(path),
model = ViTForImageClassification.from_pretrained(path),
)
img = "Your image path"
label = classifier.predict(img_path=img)
print("Predicted class:", label)
```
> Disclaimer: This model was trained for research only
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
gmongaras/gpt-anime-sub-1.3B
|
gmongaras
| 2023-04-26T19:53:42Z | 142 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neo",
"text-generation",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-01T05:26:48Z |
---
license: openrail
---
This model is fine-tuned from the following base model:
https://huggingface.co/EleutherAI/gpt-neo-1.3B
The data was scraped from:
https://www.kitsunekko.net/dirlist.php?dir=subtitles%2F
To load, use:
model = pipeline('text-generation',model="gmongaras/gpt-anime-sub-1.3B",
tokenizer="EleutherAI/gpt-neo-1.3B")
|
Callidior/bert2bert-base-arxiv-titlegen
|
Callidior
| 2023-04-26T19:42:59Z | 163 | 13 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:arxiv_dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- arxiv_dataset
metrics:
- rouge
widget:
- text: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."
---
# Paper Title Generator
Generates titles for computer science papers given an abstract.
The model is a BERT2BERT Encoder-Decoder using the official `bert-base-uncased` checkpoint as initialization for the encoder and decoder.
It was fine-tuned on 318,500 computer science papers posted on arXiv.org between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data.
**Live Demo:** [https://paper-titles.ey.r.appspot.com/](https://paper-titles.ey.r.appspot.com/)
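To try it locally, a minimal sketch with the summarization pipeline (the abstract below is an arbitrary example):
```python
from transformers import pipeline

titler = pipeline("summarization", model="Callidior/bert2bert-base-arxiv-titlegen")

abstract = (
    "The dominant sequence transduction models are based on complex recurrent or "
    "convolutional neural networks in an encoder-decoder configuration. We propose "
    "a new simple network architecture, the Transformer, based solely on attention "
    "mechanisms, dispensing with recurrence and convolutions entirely."
)
print(titler(abstract, max_length=32)[0]["summary_text"])
```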
|
PanEa/dolly-v2-gptj-enhanced-auto-gptq
|
PanEa
| 2023-04-26T19:16:11Z | 6 | 1 |
transformers
|
[
"transformers",
"gptj",
"text-generation",
"en",
"dataset:vicgalle/alpaca-gpt4",
"dataset:databricks/databricks-dolly-15k",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-26T08:08:50Z |
---
license: afl-3.0
datasets:
- vicgalle/alpaca-gpt4
- databricks/databricks-dolly-15k
language:
- en
metrics:
- perplexity
pipeline_tag: text-generation
---
|
JgnMama/Erniee
|
JgnMama
| 2023-04-26T19:15:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T19:12:31Z |
---
license: creativeml-openrail-m
---
|
Pennyyyyy/t5-end2end-questions-generation
|
Pennyyyyy
| 2023-04-26T19:10:40Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-26T13:24:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5884 | 0.34 | 100 | 1.9159 |
| 1.9705 | 0.68 | 200 | 1.7310 |
| 1.8439 | 1.02 | 300 | 1.6672 |
| 1.7426 | 1.35 | 400 | 1.6382 |
| 1.7147 | 1.69 | 500 | 1.6199 |
| 1.6908 | 2.03 | 600 | 1.6053 |
| 1.6315 | 2.37 | 700 | 1.5967 |
| 1.627 | 2.71 | 800 | 1.5939 |
| 1.6122 | 3.05 | 900 | 1.5877 |
| 1.5706 | 3.39 | 1000 | 1.5861 |
| 1.5708 | 3.73 | 1100 | 1.5742 |
| 1.5534 | 4.06 | 1200 | 1.5798 |
| 1.5351 | 4.4 | 1300 | 1.5738 |
| 1.5226 | 4.74 | 1400 | 1.5757 |
| 1.5187 | 5.08 | 1500 | 1.5727 |
| 1.4963 | 5.42 | 1600 | 1.5710 |
| 1.4841 | 5.76 | 1700 | 1.5668 |
| 1.5025 | 6.1 | 1800 | 1.5688 |
| 1.4778 | 6.44 | 1900 | 1.5717 |
| 1.4769 | 6.77 | 2000 | 1.5674 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Sergendel/a2c-PandaReachDense-v2
|
Sergendel
| 2023-04-26T19:08:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T19:05:30Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.47 +/- 0.30
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
anilkumar2444/a2c-AntBulletEnv-v0
|
anilkumar2444
| 2023-04-26T19:06:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T19:05:13Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 838.75 +/- 152.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sdesai/narrativa-finetuned-wmt22-en-pt-br-brwac
|
sdesai
| 2023-04-26T18:58:42Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-26T18:43:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: narrativa-finetuned-wmt22-en-pt-br-brwac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# narrativa-finetuned-wmt22-en-pt-br-brwac
This model is a fine-tuned version of [Narrativa/mbart-large-50-finetuned-opus-en-pt-translation](https://huggingface.co/Narrativa/mbart-large-50-finetuned-opus-en-pt-translation) on an unknown dataset.
Training data also included the BrWaC (Brazilian Portuguese web) corpus.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
- epoch = 3.0
- eval_bleu = 63.9813
- eval_gen_len = 12.3215
- eval_loss = 0.4894
- eval_runtime = 0:00:20.01
- eval_samples = 190
- eval_samples_per_second = 9.492
- eval_steps_per_second = 2.398
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ericrincon/Qtable_taxi
|
ericrincon
| 2023-04-26T18:52:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T18:52:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Qtable_taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ericrincon/Qtable_taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ericrincon/Frozen_Lake_4x4
|
ericrincon
| 2023-04-26T18:50:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T18:50:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Frozen_Lake_4x4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
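# As in the course notebook, `load_from_hub` is the helper defined there (it
# downloads the pickled Q-table via huggingface_hub); `gym` / `gymnasium` is
# needed for `gym.make` below.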
model = load_from_hub(repo_id="ericrincon/Frozen_Lake_4x4", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
email81227/poca-SoccerTwos
|
email81227
| 2023-04-26T18:38:17Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-04-26T18:07:55Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: email81227/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
LecJackS/distilbert-base-uncased-finetuned-emotion
|
LecJackS
| 2023-04-26T18:36:36Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T18:23:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9244610483889744
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2193
- Accuracy: 0.9245
- F1: 0.9245
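A minimal, hedged usage sketch with the text-classification pipeline; the input sentence is arbitrary:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="LecJackS/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how wonderful this day turned out!"))
```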
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8598 | 1.0 | 250 | 0.3274 | 0.9005 | 0.8966 |
| 0.2584 | 2.0 | 500 | 0.2193 | 0.9245 | 0.9245 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.10.3
|
Asif782/lora
|
Asif782
| 2023-04-26T18:36:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T18:11:22Z |
---
license: creativeml-openrail-m
---
|
KigenCHESS/eng-sw_translation
|
KigenCHESS
| 2023-04-26T18:33:41Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-26T18:30:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KigenCHESS/eng-sw_translation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KigenCHESS/eng-sw_translation
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-sw](https://huggingface.co/Helsinki-NLP/opus-mt-en-sw) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5372
- Validation Loss: 0.6656
- Epoch: 1
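The repo ships TensorFlow weights, so a hedged usage sketch loads it through the TF side of the pipeline:
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="KigenCHESS/eng-sw_translation",
    framework="tf",  # assumption: only TensorFlow weights are available
)
print(translator("How are you today?")[0]["translation_text"])
```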
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 424, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.9069 | 0.7022 | 0 |
| 0.5372 | 0.6656 | 1 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
Raiden-1001/poca-SoccerTwos
|
Raiden-1001
| 2023-04-26T18:27:48Z | 89 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-23T05:59:23Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Raiden-1001/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
dkerja/songhyekyo
|
dkerja
| 2023-04-26T18:14:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T18:13:08Z |
---
license: creativeml-openrail-m
---
|
Pranjalya/a2c-AntBulletEnv-v0
|
Pranjalya
| 2023-04-26T18:09:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T18:08:06Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1829.43 +/- 423.83
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
freya48/nekoda
|
freya48
| 2023-04-26T18:08:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T18:03:06Z |
---
license: creativeml-openrail-m
---
|
kaisar-barlybay-sse/qard-bert-base-multilingual-uncased_6
|
kaisar-barlybay-sse
| 2023-04-26T17:52:59Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-04-26T14:51:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: qard-bert-base-multilingual-uncased_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qard-bert-base-multilingual-uncased_6
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3863
- Accuracy: 0.3074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.399 | 1.0 | 501 | 1.3862 | 0.3373 |
| 1.3921 | 2.0 | 1002 | 1.3862 | 0.3613 |
| 1.3905 | 3.0 | 1503 | 1.3863 | 0.3273 |
| 1.3903 | 4.0 | 2004 | 1.3863 | 0.2455 |
| 1.3904 | 5.0 | 2505 | 1.3863 | 0.2834 |
| 1.3898 | 6.0 | 3006 | 1.3863 | 0.3074 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
kaisar-barlybay-sse/qard-distilbert-base-multilingual-cased_6
|
kaisar-barlybay-sse
| 2023-04-26T17:33:35Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-04-26T14:19:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: qard-distilbert-base-multilingual-cased_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qard-distilbert-base-multilingual-cased_6
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3254
- Accuracy: 0.4331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3738 | 1.0 | 501 | 1.3361 | 0.4152 |
| 1.2866 | 2.0 | 1002 | 1.3259 | 0.4212 |
| 1.0942 | 3.0 | 1503 | 1.3760 | 0.4391 |
| 0.8393 | 4.0 | 2004 | 1.6132 | 0.4291 |
| 0.6062 | 5.0 | 2505 | 1.8334 | 0.4391 |
| 0.4319 | 6.0 | 3006 | 2.3254 | 0.4331 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
YchKhan/EloQuence
|
YchKhan
| 2023-04-26T17:30:58Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-04-26T17:22:36Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shahukareem/bulhaa-cat
|
shahukareem
| 2023-04-26T17:10:07Z | 31 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"animal",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-19T15:15:29Z |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: A cute and adorable photo of a cat
---
# DreamBooth model for the bulhaa concept trained by shahukareem on the shahukareem/cat dataset.
This is a Stable Diffusion model fine-tuned on the bulhaa concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of bulhaa cat**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `cat` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('shahukareem/bulhaa-cat')
image = pipeline("a photo of bulhaa cat").images[0]
image
```
|
ericrincon/LunarLander-v2
|
ericrincon
| 2023-04-26T16:51:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T16:44:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.64 +/- 23.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename inside the repository is an assumption and should be replaced with the actual *.zip file:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Download the checkpoint from the Hub and load it back into a PPO agent.
checkpoint = load_from_hub("ericrincon/LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename is an assumption
model = PPO.load(checkpoint)
```
|
sophiebottani/distilbert_squad_newsqa
|
sophiebottani
| 2023-04-26T16:22:54Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:newsqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-26T07:12:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert_squad_newsqa
results: []
datasets:
- newsqa
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_squad_newsqa
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the NewsQA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6247
## Model description
More information needed
## Intended uses & limitations
More information needed
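A minimal usage sketch (the question and context strings are placeholders):
```python
from transformers import pipeline
# Load the fine-tuned checkpoint into a standard extractive question-answering pipeline.
qa = pipeline("question-answering", model="sophiebottani/distilbert_squad_newsqa")
result = qa(question="Who wrote the report?", context="The report was written by the city council last week.")
print(result["answer"], result["score"])
```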
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7181 | 1.0 | 6730 | 1.6477 |
| 1.4932 | 2.0 | 13460 | 1.6274 |
| 1.4426 | 3.0 | 20190 | 1.6247 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kaisar-barlybay-sse/qard-bert-base-multilingual-cased_6
|
kaisar-barlybay-sse
| 2023-04-26T15:57:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-04-26T11:43:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: qard-bert-base-multilingual-cased_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qard-bert-base-multilingual-cased_6
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3526
- Accuracy: 0.3453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3904 | 1.0 | 501 | 1.3831 | 0.3473 |
| 1.3997 | 2.0 | 1002 | 1.3863 | 0.3234 |
| 1.3937 | 3.0 | 1503 | 1.3865 | 0.2395 |
| 1.3912 | 4.0 | 2004 | 1.3862 | 0.3253 |
| 1.3919 | 5.0 | 2505 | 1.3861 | 0.3633 |
| 1.3713 | 6.0 | 3006 | 1.3526 | 0.3453 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Sergendel/ppo-SnowballTarget
|
Sergendel
| 2023-04-26T15:33:35Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-04-26T15:33:30Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: Sergendel/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bobLi/autotrain-burp-52899124622
|
bobLi
| 2023-04-26T15:30:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:bobLi/autotrain-data-burp",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T15:28:52Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bobLi/autotrain-data-burp
co2_eq_emissions:
emissions: 0.004479786338858913
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 52899124622
- CO2 Emissions (in grams): 0.0045
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bobLi/autotrain-burp-52899124622
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bobLi/autotrain-burp-52899124622", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bobLi/autotrain-burp-52899124622", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
GANYANG/gpt-2
|
GANYANG
| 2023-04-26T15:24:20Z | 0 | 0 | null |
[
"pytorch",
"tensorboard",
"generated_from_trainer",
"license:mit",
"region:us"
] | null | 2023-04-12T03:39:20Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6604
## Model description
More information needed
## Intended uses & limitations
More information needed
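A minimal usage sketch, assuming the repository also hosts the matching tokenizer files:
```python
from transformers import pipeline
# Load the fine-tuned checkpoint into a text-generation pipeline and sample a continuation.
generator = pipeline("text-generation", model="GANYANG/gpt-2")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```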
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.837 | 0.51 | 200 | 1.7257 |
| 1.7602 | 1.03 | 400 | 1.6777 |
| 1.7341 | 1.54 | 600 | 1.6604 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.13.3
|
Supparesk/t5-end2end-questions-generation
|
Supparesk
| 2023-04-26T15:09:36Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-24T17:56:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5791
## Model description
More information needed
## Intended uses & limitations
More information needed
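A minimal usage sketch; the `generate questions:` prefix follows the usual squad_modified_for_t5_qg convention and should be treated as an assumption:
```python
from transformers import pipeline
# End-to-end question generation: feed a passage, get generated questions back.
qg = pipeline("text2text-generation", model="Supparesk/t5-end2end-questions-generation")
passage = "generate questions: The Eiffel Tower was completed in 1889 and is located in Paris."
print(qg(passage, max_length=64)[0]["generated_text"])
```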
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5888 | 0.34 | 100 | 1.9194 |
| 1.9722 | 0.68 | 200 | 1.7316 |
| 1.8479 | 1.02 | 300 | 1.6689 |
| 1.7478 | 1.35 | 400 | 1.6409 |
| 1.7204 | 1.69 | 500 | 1.6268 |
| 1.6986 | 2.03 | 600 | 1.6105 |
| 1.6437 | 2.37 | 700 | 1.6007 |
| 1.639 | 2.71 | 800 | 1.5952 |
| 1.6261 | 3.05 | 900 | 1.5909 |
| 1.5915 | 3.39 | 1000 | 1.5861 |
| 1.5917 | 3.73 | 1100 | 1.5829 |
| 1.5772 | 4.06 | 1200 | 1.5788 |
| 1.5697 | 4.4 | 1300 | 1.5800 |
| 1.557 | 4.74 | 1400 | 1.5791 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
gaussalgo/MiniLM-L6-v2-Canard-Fullwiki
|
gaussalgo
| 2023-04-26T15:03:24Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-04-26T15:02:58Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# gaussalgo/MiniLM-L6-v2-Canard-Fullwiki
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gaussalgo/MiniLM-L6-v2-Canard-Fullwiki')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gaussalgo/MiniLM-L6-v2-Canard-Fullwiki)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1988 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ckallur/ppo-LunarLander-v2
|
ckallur
| 2023-04-26T14:59:53Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T14:59:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.44 +/- 19.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
lorenzoncina/whisper-medium-ru
|
lorenzoncina
| 2023-04-26T14:59:33Z | 34 | 7 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ru",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-04-07T15:58:48Z |
---
language:
- ru
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium Russian
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0 ru
type: mozilla-foundation/common_voice_11_0
config: ru
split: test
args: ru
metrics:
- type: wer
value: 7.562437929892964
name: Wer
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: ru_ru
split: test
metrics:
- type: wer
value: 10.92
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Russian
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 ru dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2253
- Wer: 7.5624
## Model description
More information needed
## Intended uses & limitations
More information needed
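A minimal usage sketch (the audio path is a placeholder):
```python
from transformers import pipeline
# Transcribe a local Russian audio file with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="lorenzoncina/whisper-medium-ru")
print(asr("sample_ru.wav")["text"])
```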
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1578 | 0.1 | 1000 | 0.1662 | 8.8290 |
| 0.045 | 1.08 | 2000 | 0.1748 | 8.9148 |
| 0.0176 | 2.06 | 3000 | 0.1889 | 8.7848 |
| 0.0104 | 3.04 | 4000 | 0.1922 | 8.4354 |
| 0.0051 | 4.02 | 5000 | 0.2034 | 8.1865 |
| 0.0047 | 4.12 | 6000 | 0.2012 | 8.0455 |
| 0.0018 | 5.1 | 7000 | 0.2117 | 7.6237 |
| 0.0004 | 6.08 | 8000 | 0.2177 | 7.6078 |
| 0.0003 | 7.06 | 9000 | 0.2244 | 7.6262 |
| 0.0002 | 8.04 | 10000 | 0.2253 | 7.5624 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.1.dev0
- Tokenizers 0.13.2
|
Carlosrelao/Reinforce-CartPole1
|
Carlosrelao
| 2023-04-26T14:53:34Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T14:53:22Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 459.40 +/- 121.80
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
murlina/ppo-LunarLander-v2
|
murlina
| 2023-04-26T14:45:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-25T17:12:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.30 +/- 32.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
gRaphael/ppo-LunarLander-v2
|
gRaphael
| 2023-04-26T14:36:49Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T14:27:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.49 +/- 14.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
MiniMinMax/Reinforce-Pixelcopter
|
MiniMinMax
| 2023-04-26T14:32:03Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T14:31:59Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.40 +/- 24.45
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
doarisono/Doar
|
doarisono
| 2023-04-26T14:28:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T14:13:06Z |
---
license: creativeml-openrail-m
---
|
Ashfaq60/Ashfaq
|
Ashfaq60
| 2023-04-26T14:27:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-04-26T13:07:28Z |
---
license: artistic-2.0
---
|
rawmt/finetuning-sentiment-model-3000-samples
|
rawmt
| 2023-04-26T14:24:05Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T14:10:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3114
- Accuracy: 0.8733
- F1: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
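A minimal usage sketch; because the training data is unspecified, the label names returned by the pipeline are whatever the fine-tuning run configured:
```python
from transformers import pipeline
# Run the fine-tuned sentiment classifier on a sample sentence.
classifier = pipeline("text-classification", model="rawmt/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was surprisingly good!"))
```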
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
uisikdag/42000news_turkish_bert_uncased_finetune
|
uisikdag
| 2023-04-26T14:21:30Z | 185 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-14T14:37:45Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: umit_42000news_bertuncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# umit_42000news_bertuncased
This model is a fine-tuned version of [dbmdz/bert-base-turkish-uncased](https://huggingface.co/dbmdz/bert-base-turkish-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Sinsinnati/hf_workshop_extra
|
Sinsinnati
| 2023-04-26T14:18:43Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text-classification",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T14:08:49Z |
---
pipeline_tag: text-classification
widget:
- text: He loves learning new things.
- text: I go to university every day.
---
|
BlueAvenir/proseiben_events_activities_announcements
|
BlueAvenir
| 2023-04-26T14:17:55Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-04-26T14:17:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# BlueAvenir/proseiben_events_activities_announcements
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('BlueAvenir/proseiben_events_activities_announcements')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BlueAvenir/proseiben_events_activities_announcements')
model = AutoModel.from_pretrained('BlueAvenir/proseiben_events_activities_announcements')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=BlueAvenir/proseiben_events_activities_announcements)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 653 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 653,
"warmup_steps": 66,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ndhieunguyen/Reinforce-PixelCopter
|
ndhieunguyen
| 2023-04-26T14:15:26Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-24T01:44:42Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 46.20 +/- 34.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Dewa/ppo-Lunar_rl-v5
|
Dewa
| 2023-04-26T14:01:40Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T14:00:55Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -114.16 +/- 28.04
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.004
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.92
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Dewa/ppo-Lunar_rl-v5'
'batch_size': 512
'minibatch_size': 128}
```
|
Dewa/dqn-SpaceInvadersNoFrameskip-v4-version-6
|
Dewa
| 2023-04-26T13:39:48Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T12:50:28Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 274.50 +/- 31.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dewa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dewa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Dewa
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
PaulineSanchez/autotrain-translation_food_english_to_french-52830124391
|
PaulineSanchez
| 2023-04-26T13:36:23Z | 223 | 2 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain",
"translation",
"en",
"fr",
"dataset:PaulineSanchez/autotrain-data-translation_food_english_to_french",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-04-26T13:14:44Z |
---
tags:
- autotrain
- translation
language:
- en
- fr
datasets:
- PaulineSanchez/autotrain-data-translation_food_english_to_french
co2_eq_emissions:
emissions: 8.23780867881086
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 52830124391
- CO2 Emissions (in grams): 8.2378
## Validation Metrics
- Loss: 0.539
- SacreBLEU: 61.476
- Gen len: 12.913
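## Usage
A minimal usage sketch; the underlying checkpoint is a Marian model, so the standard translation pipeline applies (the example sentence is a placeholder):
```python
from transformers import pipeline
# Translate an English food-related sentence into French.
translator = pipeline("translation", model="PaulineSanchez/autotrain-translation_food_english_to_french-52830124391")
print(translator("Grilled salmon with a lemon butter sauce")[0]["translation_text"])
```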
|
Sergendel/Reinforce-PixelCopter_v2
|
Sergendel
| 2023-04-26T13:35:08Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T13:35:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter_v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.60 +/- 30.90
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Yonadav/summarization_t5base_en_to_kjven
|
Yonadav
| 2023-04-26T13:32:17Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-26T07:40:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: summarization_t5base_en_to_kjven
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization_t5base_en_to_kjven
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8324
- Bleu: 21.2143
- Gen Len: 18.1685
## Model description
More information needed
## Intended uses & limitations
More information needed
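A minimal usage sketch; whether the model expects a T5-style task prefix is not documented here, so plain input text is an assumption:
```python
from transformers import pipeline
# Feed modern English text to the fine-tuned T5 checkpoint and print its output.
converter = pipeline("text2text-generation", model="Yonadav/summarization_t5base_en_to_kjven")
print(converter("In the beginning God created the heavens and the earth.", max_length=64)[0]["generated_text"])
```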
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.0735 | 1.0 | 2860 | 0.9479 | 21.3913 | 18.1219 |
| 0.9776 | 2.0 | 5720 | 0.8750 | 22.1711 | 18.1307 |
| 0.918 | 3.0 | 8580 | 0.8317 | 22.6915 | 18.1381 |
| 0.8741 | 4.0 | 11440 | 0.8039 | 23.0856 | 18.1468 |
| 0.8489 | 5.0 | 14300 | 0.7841 | 23.3573 | 18.1455 |
| 0.8169 | 6.0 | 17160 | 0.7664 | 23.5073 | 18.1493 |
| 0.7965 | 7.0 | 20020 | 0.7532 | 23.6919 | 18.1495 |
| 0.78 | 8.0 | 22880 | 0.7411 | 23.8445 | 18.1461 |
| 0.7568 | 9.0 | 25740 | 0.7338 | 23.86 | 18.155 |
| 0.7496 | 10.0 | 28600 | 0.7228 | 23.953 | 18.1511 |
| 0.7411 | 11.0 | 31460 | 0.7175 | 24.0327 | 18.1511 |
| 0.8376 | 12.0 | 34320 | 0.8114 | 23.311 | 18.1319 |
| 1.1918 | 13.0 | 37180 | 0.9686 | 21.5339 | 18.1185 |
| 1.0929 | 14.0 | 40040 | 0.8978 | 21.561 | 18.1455 |
| 1.0373 | 15.0 | 42900 | 0.8617 | 21.4942 | 18.1542 |
| 1.0165 | 16.0 | 45760 | 0.8432 | 21.3962 | 18.1595 |
| 0.9973 | 17.0 | 48620 | 0.8340 | 21.2558 | 18.166 |
| 0.9889 | 18.0 | 51480 | 0.8326 | 21.2238 | 18.1687 |
| 0.9909 | 19.0 | 54340 | 0.8325 | 21.2216 | 18.1688 |
| 0.9942 | 20.0 | 57200 | 0.8324 | 21.2143 | 18.1685 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
josu/gpt-neo-1.3B-instruction
|
josu
| 2023-04-26T13:27:25Z | 19 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-14T20:56:15Z |
---
language:
- pt
widget:
- text: Explique o que é inteligência artificial.
- text: Explique o que é processamento de linguagem natural.
---
``` python
from transformers import GenerationConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("josu/gpt-neo-1.3B-instruction")
model.cuda()  # input_ids are moved to the GPU below, so the model must be on the GPU as well
tokenizer = AutoTokenizer.from_pretrained("josu/gpt-neo-1.3B-instruction")
def generate_prompt(instruction, input=None):
if input:
return f"""Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.
### Instrução:
{instruction}
### Entrada:
{input}
### Resposta:"""
else:
return f"""Abaixo está uma instrução que descreve uma tarefa. Escreva uma resposta que complete adequadamente o pedido.
### Instrução:
{instruction}
### Resposta:"""
generation_config = GenerationConfig(
temperature=0.2,
top_p=0.75,
num_beams=4,
)
def evaluate(instruction, input=None):
prompt = generate_prompt(instruction, input)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=256
)
content = []
for s in generation_output.sequences:
output = tokenizer.decode(s)
content.append(output.split("### Resposta:")[1].strip())
return content
```
|
UsuallyPoncho/ppo-Huggy
|
UsuallyPoncho
| 2023-04-26T13:22:22Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-04-26T13:22:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: UsuallyPoncho/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
XdSlams/fjhqgwkjwehhrfgir28
|
XdSlams
| 2023-04-26T13:06:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T07:09:24Z |
---
license: creativeml-openrail-m
---
|
LarryAIDraw/jessicaGranblue_v10
|
LarryAIDraw
| 2023-04-26T13:06:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T12:59:23Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/50740/jessica-or-granblue-fantasy
|
LarryAIDraw/eremiteScorching_v10
|
LarryAIDraw
| 2023-04-26T13:05:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T12:59:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/50612/eremite-scorching-loremaster-genshin-impact
|
LarryAIDraw/towerOfFantasyFiona_v10
|
LarryAIDraw
| 2023-04-26T13:05:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T12:57:55Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/50216/tower-of-fantasy-fiona
|
LarryAIDraw/yamashiroAzurLane_v10
|
LarryAIDraw
| 2023-04-26T13:04:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T12:57:08Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/50909/yamashiroazur-lane
|
LarryAIDraw/sanjounoHaruhimeDunmachi_v10
|
LarryAIDraw
| 2023-04-26T13:04:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T12:56:21Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/51211/sanjouno-haruhime-dunmachi
|
LarryAIDraw/7thMarchHonkaiStar_v10
|
LarryAIDraw
| 2023-04-26T13:04:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T12:55:59Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/50469/7th-march-honkai-star-rail
|
alibidaran/Symptom2disease
|
alibidaran
| 2023-04-26T12:51:01Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T11:19:00Z |
---
license: apache-2.0
pipeline_tag: text-classification
---
|
aravind-selvam/donut_finetuned_chart
|
aravind-selvam
| 2023-04-26T12:49:39Z | 53 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-04-17T12:33:36Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: donut_finetuned_chart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_finetuned_chart
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on a dataset of chart images.
It achieves the following results on the evaluation set:
- Loss: 0.4957
- Cer: 0.2318
## Model description
More information needed
## Intended uses & limitations
More information needed
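A minimal usage sketch; the decoder task-prompt token used during fine-tuning is not documented here, so the `<s>` start token is an assumption, and `chart.png` is a placeholder path:
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel
processor = DonutProcessor.from_pretrained("aravind-selvam/donut_finetuned_chart")
model = VisionEncoderDecoderModel.from_pretrained("aravind-selvam/donut_finetuned_chart")
image = Image.open("chart.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```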
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4943 | 1.0 | 166 | 0.6634 | 0.2341 |
| 0.475 | 2.0 | 333 | 0.5370 | 0.2320 |
| 0.3009 | 3.0 | 500 | 0.5051 | 0.2318 |
| 0.2611 | 3.98 | 664 | 0.4957 | 0.2318 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DTorregrosa/sd-class-butterflies-64
|
DTorregrosa
| 2023-04-26T12:49:08Z | 36 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-04-26T12:48:19Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('DTorregrosa/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
Oleksandr2003/QA_model
|
Oleksandr2003
| 2023-04-26T12:48:53Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T15:51:42Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: QA_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_model
This model is a fine-tuned version of [ukr-models/xlm-roberta-base-uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4719 | 1.0 | 620 | 1.3108 |
| 1.4047 | 2.0 | 1240 | 1.1630 |
| 1.1245 | 3.0 | 1860 | 1.1429 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Dewa/dqn-SpaceInvadersNoFrameskip-v4-version-5
|
Dewa
| 2023-04-26T12:43:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T12:43:11Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 14.50 +/- 12.34
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dewa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dewa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Dewa
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Ubenwa/sb-ecapa-vggsound
|
Ubenwa
| 2023-04-26T12:37:18Z | 7 | 1 |
speechbrain
|
[
"speechbrain",
"embeddings",
"Sound",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"audio-classification",
"en",
"dataset:VGGSound",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] |
audio-classification
| 2023-01-06T01:25:21Z |
---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Sound
- pytorch
- ECAPA-TDNN
- TDNN
- audio-classification
license: "apache-2.0"
datasets:
- VGGSound
metrics:
- Accuracy
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Sound Recognition with ECAPA embeddings on VGGSound
This repository provides all the necessary tools to perform sound recognition with SpeechBrain using a model pretrained on VGGSound.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The model's performance on the test set is:
| Release | Error Rate (%) |
|:-------------:|:--------------:|
| 28-02-23 | 42.8 |
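A minimal usage sketch is shown below. It assumes this checkpoint follows SpeechBrain's standard `EncoderClassifier` layout (hyperparams file and label encoder); check the repository files before relying on it.
```python
# A sketch, assuming this checkpoint exposes SpeechBrain's standard classifier interface.
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="Ubenwa/sb-ecapa-vggsound",
    savedir="pretrained_models/sb-ecapa-vggsound",
)

# Classify a sound file; returns posteriors, best score, class index, and text label.
out_prob, score, index, text_lab = classifier.classify_file("example.wav")
print(text_lab)

# The same object can also extract ECAPA embeddings from a batch of waveforms.
signal = classifier.load_audio("example.wav").unsqueeze(0)
embeddings = classifier.encode_batch(signal)
```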
#### Referencing ECAPA
```bibtex
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
#### Referencing VGGSound
```bibtex
@inproceedings{chen2020vggsound,
title={Vggsound: A large-scale audio-visual dataset},
author={Chen, Honglie and Xie, Weidi and Vedaldi, Andrea and Zisserman, Andrew},
booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={721--725},
year={2020},
organization={IEEE}
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
jorgefedzhedz/distilbert-base-uncased-finetuned-cola
|
jorgefedzhedz
| 2023-04-26T12:33:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T12:09:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.541934635424655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8224
- Matthews Correlation: 0.5419
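As a quick sketch (not part of the original card), the checkpoint can be loaded with the `transformers` text-classification pipeline; the meaning of the exported labels is an assumption, since the card does not state the id2label mapping.
```python
from transformers import pipeline

# Sketch only: label semantics (acceptable vs. unacceptable) depend on the saved config.
classifier = pipeline(
    "text-classification",
    model="jorgefedzhedz/distilbert-base-uncased-finetuned-cola",
)

print(classifier("The book was written by the author."))
print(classifier("The book the wrote author."))
```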
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5231 | 1.0 | 535 | 0.5305 | 0.4003 |
| 0.348 | 2.0 | 1070 | 0.5013 | 0.4885 |
| 0.2353 | 3.0 | 1605 | 0.5578 | 0.5299 |
| 0.1846 | 4.0 | 2140 | 0.7711 | 0.5176 |
| 0.1363 | 5.0 | 2675 | 0.8224 | 0.5419 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
metrosir/sd
|
metrosir
| 2023-04-26T12:32:27Z | 49 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-04-25T13:01:04Z |
# Chill Watcher
consider deploying on:
- huggingface inference point
- replicate api
- lightning.ai
# platform comparison
> all support autoscaling
|platform|prediction speed|charges|deploy handiness|
|-|-|-|-|
|huggingface|fast:20s|high:$0.6/hr (without autoscaling)|easy:git push|
|replicate|fast if used frequently: 30s, slow if needs initialization: 5min|low: $0.02 per generation|difficult: build image and upload|
|lightning.ai|fast with app running: 20s, slow if idle: XXs|low: free $30 per month, $0.18 per init, $0.02 per run|easy: one command|
# platform deploy options
## huggingface
> [docs](https://huggingface.co/docs/inference-endpoints/guides/custom_handler)
- requirements: use pip packages in `requirements.txt`
- `init()` and `predict()` function: use `handler.py`, implement the `EndpointHandler` class
- more: modify `handler.py` for requests and inference and explore more highly-customized features
- deploy: git (lfs) push to the huggingface repository (the whole directory, including models, weights, etc.), and use Inference Endpoints to deploy. Click and deploy automatically; very simple.
- call api: use the url provided by Inference Endpoints once the endpoint is ready (built, initialized and in a "running" state), then make a POST request to that url using the request schema defined in `handler.py`. A minimal handler sketch follows this list.
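Below is a minimal `handler.py` sketch following the custom-handler interface from the linked docs; the diffusers pipeline and the request/response fields are illustrative assumptions, not the code actually shipped in this repository.
```python
# handler.py -- a sketch of the custom-handler interface, not this repo's actual code.
from typing import Any, Dict, List
import base64
import io


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points at the repository contents on the endpoint; load weights once here.
        # The diffusers pipeline below is an assumption about what this repo serves.
        from diffusers import StableDiffusionPipeline
        self.pipe = StableDiffusionPipeline.from_pretrained(path)
        self.pipe.to("cuda")

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # Inference Endpoints pass the parsed JSON body; "inputs" is the conventional key.
        prompt = data["inputs"]
        image = self.pipe(prompt).images[0]
        buf = io.BytesIO()
        image.save(buf, format="PNG")
        return [{"image": base64.b64encode(buf.getvalue()).decode("utf-8")}]
```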
## replicate
> [docs](https://replicate.com/docs/guides/push-a-model)
- requirements: specify all requirements (pip packages, system packages, python version, cuda, etc.) in `cog.yaml`
- `init()` and `predict()` function: use `predict.py`, implement the `Predictor` class
- more: modify `predict.py`
- deploy:
1. get a linux GPU machine with 60GB disk space;
2. install [cog](https://replicate.com/docs/guides/push-a-model) and [docker](https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository)
3. `git pull` the current repository from huggingface, including large model files
4. after `predict.py` and `cog.yaml` are correctly written, run `cog login` and then `cog push`; cog will build a docker image locally and push it to replicate. Since the image can take 30GB or so of disk space, this costs a lot of network bandwidth.
- call api: if everything runs successfully and the docker image is pushed to replicate, you will see a web-ui and an API example directly in your replicate repository. A minimal predictor sketch follows this list.
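Below is a minimal `predict.py` sketch following cog's documented `BasePredictor` interface; the diffusers pipeline and the output handling are illustrative assumptions, not this repository's actual implementation.
```python
# predict.py -- a sketch of cog's predictor interface, not this repo's actual implementation.
from cog import BasePredictor, Input, Path


class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts; load the (assumed) diffusers pipeline here.
        from diffusers import StableDiffusionPipeline
        self.pipe = StableDiffusionPipeline.from_pretrained("./weights").to("cuda")

    def predict(self, prompt: str = Input(description="Text prompt")) -> Path:
        # Each API call maps to one predict(); cog serves the returned Path as a file.
        image = self.pipe(prompt).images[0]
        out = "/tmp/output.png"
        image.save(out)
        return Path(out)
```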
## lightning.ai
> docs: [code](https://lightning.ai/docs/app/stable/levels/basic/real_lightning_component_implementations.html), [deploy](https://lightning.ai/docs/app/stable/workflows/run_app_on_cloud/)
- requirements:
- pip packages are listed in `requirements.txt`; note that some requirements differ from the huggingface ones, so you need to modify some lines in `requirements.txt` according to the comments in that file
- other pip packages, system packages and download commands for some big model weight files can be listed using a custom build config. Check out `class CustomBuildConfig(BuildConfig)` in `app.py`. In a custom build config you can use many linux commands such as `wget` and `sudo apt-get update`. The custom build config is executed during the `__init__()` of the `PythonServer` class
- `init()` and `predict()` function: use `app.py`, implement the `PythonServer` class. Note:
- some packages haven't been installed yet when the file is first imported (they may only get installed once `__init__()` is called), so some import statements should live inside the functions rather than at the top of the file, or you may get import errors.
- you can't store your own values on the `PythonServer` instance unless they are predefined attributes, so don't assign any self-defined variables to `self`
- if you use the custom build config, you have to implement `PythonServer`'s `__init__()` yourself, so don't forget to use the correct function signature
- more: ...
- deploy:
- `pip install lightning`
- prepare the directory on your local computer (no need for a GPU)
- list big files in the `.lightningignore` file to avoid uploading them and to save deploy time
- run `lightning run app app.py --cloud` in the local terminal; it will upload the files in the directory to lightning cloud and start deploying on the cloud
- check error logs on the web-ui, use `all logs`
- call api: only once the app starts successfully will you see a valid url in the `settings` page of the web-ui. Open that url, and you can see the api
### some install references:
install docker:
- https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository
install git-lfs:
- https://github.com/git-lfs/git-lfs/blob/main/INSTALLING.md
linux:
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
```
---
license: apache-2.0
---
|
Dewa/pixelcopter_rl-v4
|
Dewa
| 2023-04-26T12:31:11Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T12:31:06Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter_rl-v4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 22.20 +/- 15.35
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mtc/mbart-newsum
|
mtc
| 2023-04-26T12:29:27Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-29T13:47:33Z |
# grizzled-interest-2023-03-29
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the mtc/newsum2021 dataset.
It achieves the following results on the test set:
- Loss: 3.5178
- Rouge1: 31.4512
- Rouge2: 11.0965
- Rougel: 21.5021
- Rougelsum: 28.634
- Gen Len: 75.755
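A hedged usage sketch (not part of the original card): the checkpoint should load through the `transformers` summarization pipeline; the input language (assumed German, per the newsum dataset) and the generation lengths are assumptions.
```python
from transformers import pipeline

# Sketch only; the card does not document preprocessing, so plain article text is assumed.
summarizer = pipeline("summarization", model="mtc/mbart-newsum")

article = "..."  # a full news article in the dataset's source language (assumed German)
summary = summarizer(article, max_length=128, min_length=30, truncation=True)
print(summary[0]["summary_text"])
```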
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.6815 | 0.89 | 500 | 3.5617 | 29.5414 | 10.5201 | 20.056 | 27.2581 | 86.07 |
| 3.4132 | 1.79 | 1000 | 3.4133 | 29.6393 | 9.9545 | 19.6903 | 27.0861 | 96.545 |
| 3.198 | 2.68 | 1500 | 3.3693 | 29.8614 | 10.4517 | 20.1728 | 27.3879 | 94.31 |
| 3.0292 | 3.58 | 2000 | 3.3370 | 30.6444 | 11.5935 | 21.1955 | 28.2699 | 87.355 |
| 2.901 | 4.47 | 2500 | 3.3440 | 30.7453 | 11.111 | 21.2076 | 28.269 | 88.365 |
| 2.7832 | 5.37 | 3000 | 3.3758 | 30.4995 | 10.9025 | 20.6601 | 28.0575 | 104.655 |
| 2.6965 | 6.26 | 3500 | 3.3793 | 31.2287 | 11.5544 | 21.1909 | 28.738 | 88.47 |
| 2.6475 | 7.16 | 4000 | 3.4083 | 32.0341 | 11.9417 | 22.2785 | 29.2495 | 84.095 |
| 2.6196 | 8.05 | 4500 | 3.4007 | 30.8963 | 11.3811 | 21.3146 | 28.3222 | 90.875 |
| 2.5574 | 8.94 | 5000 | 3.4104 | 32.3867 | 12.0469 | 21.9831 | 29.5205 | 87.46 |
| 2.4977 | 9.84 | 5500 | 3.4340 | 32.5857 | 12.5072 | 22.6288 | 30.1168 | 79.87 |
| 2.4362 | 10.73 | 6000 | 3.4626 | 31.9121 | 11.8577 | 22.3647 | 29.3822 | 85.17 |
| 2.3977 | 11.63 | 6500 | 3.4737 | 32.0202 | 12.0413 | 22.5237 | 29.5166 | 77.905 |
| 2.369 | 12.52 | 7000 | 3.4890 | 31.2516 | 11.3416 | 21.5711 | 28.5465 | 85.605 |
| 2.3446 | 13.42 | 7500 | 3.4949 | 32.1277 | 11.6876 | 22.0244 | 29.2239 | 83.895 |
| 2.3295 | 14.31 | 8000 | 3.4976 | 31.8729 | 11.629 | 21.9629 | 28.9948 | 84.47 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0.dev20230220+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
vega6000/distilgpt2-finetuned-medical
|
vega6000
| 2023-04-26T12:22:34Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-26T09:26:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-medical
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-medical
This model is a fine-tuned version of [vega6000/distilgpt2-finetuned-medical](https://huggingface.co/vega6000/distilgpt2-finetuned-medical) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6248
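A hedged usage sketch (not part of the original card): the prompt below is a guess, since the card does not document how the medical training text was structured.
```python
from transformers import pipeline

# Sketch only: the prompt style is an assumption; adjust to match the training data format.
generator = pipeline("text-generation", model="vega6000/distilgpt2-finetuned-medical")

out = generator("Symptoms of influenza include", max_new_tokens=40, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```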
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 15 | 2.0817 |
| No log | 2.0 | 30 | 1.9431 |
| No log | 3.0 | 45 | 1.8487 |
| No log | 4.0 | 60 | 1.7761 |
| No log | 5.0 | 75 | 1.7253 |
| No log | 6.0 | 90 | 1.6875 |
| No log | 7.0 | 105 | 1.6574 |
| No log | 8.0 | 120 | 1.6385 |
| No log | 9.0 | 135 | 1.6288 |
| No log | 10.0 | 150 | 1.6248 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Zexois36/tokyolagi
|
Zexois36
| 2023-04-26T12:13:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T12:11:45Z |
---
license: creativeml-openrail-m
---
|
DTorregrosa/sd-class-butterflies-32
|
DTorregrosa
| 2023-04-26T12:11:54Z | 32 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-04-26T12:11:41Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('DTorregrosa/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
pszemraj/pegasus-large-book-summary
|
pszemraj
| 2023-04-26T12:01:32Z | 119 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"en",
"dataset:kmfoda/booksum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- summarization
- pegasus
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: "large earthquakes along a given fault segment do not occur at random intervals because it takes time to accumulate the strain energy for the rupture. The rates at which tectonic plates move and accumulate strain at their boundaries are approximately uniform. Therefore, in first approximation, one may expect that large ruptures of the same fault segment will occur at approximately constant time intervals. If subsequent main shocks have different amounts of slip across the fault, then the recurrence time may vary, and the basic idea of periodic mainshocks must be modified. For great plate boundary ruptures the length and slip often vary by a factor of 2. Along the southern segment of the San Andreas fault the recurrence interval is 145 years with variations of several decades. The smaller the standard deviation of the average recurrence interval, the more specific could be the long term prediction of a future mainshock."
example_title: "earthquakes"
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a 'toolbox' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5)."
example_title: "scientific paper"
- text: " the big variety of data coming from diverse sources is one of the key properties of the big data phenomenon. It is, therefore, beneficial to understand how data is generated in various environments and scenarios, before looking at what should be done with this data and how to design the best possible architecture to accomplish this The evolution of IT architectures, described in Chapter 2, means that the data is no longer processed by a few big monolith systems, but rather by a group of services In parallel to the processing layer, the underlying data storage has also changed and became more distributed This, in turn, required a significant paradigm shift as the traditional approach to transactions (ACID) could no longer be supported. On top of this, cloud computing is becoming a major approach with the benefits of reducing costs and providing on-demand scalability but at the same time introducing concerns about privacy, data ownership, etc In the meantime the Internet continues its exponential growth: Every day both structured and unstructured data is published and available for processing: To achieve competitive advantage companies have to relate their corporate resources to external services, e.g. financial markets, weather forecasts, social media, etc While several of the sites provide some sort of API to access the data in a more orderly fashion; countless sources require advanced web mining and Natural Language Processing (NLP) processing techniques: Advances in science push researchers to construct new instruments for observing the universe O conducting experiments to understand even better the laws of physics and other domains. Every year humans have at their disposal new telescopes, space probes, particle accelerators, etc These instruments generate huge streams of data, which need to be stored and analyzed. The constant drive for efficiency in the industry motivates the introduction of new automation techniques and process optimization: This could not be done without analyzing the precise data that describe these processes. As more and more human tasks are automated, machines provide rich data sets, which can be analyzed in real-time to drive efficiency to new levels. Finally, it is now evident that the growth of the Internet of Things is becoming a major source of data. More and more of the devices are equipped with significant computational power and can generate a continuous data stream from their sensors. In the subsequent sections of this chapter, we will look at the domains described above to see what they generate in terms of data sets. We will compare the volumes but will also look at what is characteristic and important from their respective points of view. 3.1 The Internet is undoubtedly the largest database ever created by humans. While several well described; cleaned, and structured data sets have been made available through this medium, most of the resources are of an ambiguous, unstructured, incomplete or even erroneous nature. Still, several examples in the areas such as opinion mining, social media analysis, e-governance, etc, clearly show the potential lying in these resources. Those who can successfully mine and interpret the Internet data can gain unique insight and competitive advantage in their business An important area of data analytics on the edge of corporate IT and the Internet is Web Analytics."
example_title: "data science textbook"
inference:
parameters:
max_length: 64
no_repeat_ngram_size: 2
encoder_no_repeat_ngram_size: 3
repetition_penalty: 2.4
length_penalty: 0.5
num_beams: 4
early_stopping: True
---
# checkpoints
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the [booksum](https://github.com/salesforce/booksum) dataset.
## Model description
More information needed
## Intended uses & limitations
- standard pegasus has a max input length of 1024 tokens, so during training the model only saw the first 1024 tokens of each chapter and learned to produce the chapter's summary from that prefix. Keep this in mind when using this model: information towards the end of a text sequence longer than 1024 tokens may be excluded from the final summary, and the model is biased towards information presented first.
- this was only trained on the dataset for one epoch but still provides reasonable results.
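A hedged usage sketch follows, reusing the generation parameters from the widget configuration above; note that anything beyond the first 1024 input tokens is effectively ignored, per the limitation described above.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="pszemraj/pegasus-large-book-summary")

long_text = "..."  # anything past roughly 1024 tokens is truncated away, per the note above
result = summarizer(
    long_text,
    max_length=64,
    no_repeat_ngram_size=2,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=2.4,
    length_penalty=0.5,
    num_beams=4,
    early_stopping=True,
    truncation=True,
)
print(result[0]["summary_text"])
```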
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.10.3
|
kahkasha/distilbert-base-uncased-finetuned-squad
|
kahkasha
| 2023-04-26T12:01:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-25T11:55:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1594
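A hedged usage sketch (not part of the original card) using the `transformers` question-answering pipeline; the question and context below are purely illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kahkasha/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```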
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.219 | 1.0 | 5533 | 1.1625 |
| 0.9573 | 2.0 | 11066 | 1.1382 |
| 0.755 | 3.0 | 16599 | 1.1594 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
iamjoy/ppo-Huggy-01
|
iamjoy
| 2023-04-26T11:58:32Z | 16 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-04-26T11:58:25Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: iamjoy/ppo-Huggy-01
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
seanghay/mt5-small-km-phoneme-reverse
|
seanghay
| 2023-04-26T11:41:52Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-26T10:45:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-km-phoneme-reverse
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-km-phoneme-reverse
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2416
- Rouge1: 30.9064
- Rouge2: 15.5474
- Rougel: 30.6746
- Rougelsum: 30.691
- Gen Len: 4.8282
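A hedged usage sketch (not part of the original card): the expected input, a Khmer phoneme sequence in whatever format was used during fine-tuning, is not documented here, so the placeholder below is an assumption.
```python
from transformers import pipeline

p2t = pipeline("text2text-generation", model="seanghay/mt5-small-km-phoneme-reverse")

phonemes = "..."  # a phoneme sequence, in the (undocumented) format used during fine-tuning
print(p2t(phonemes, max_length=32)[0]["generated_text"])
```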
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6872 | 1.0 | 2515 | 1.3202 | 28.8555 | 13.7602 | 28.6841 | 28.7043 | 4.6996 |
| 1.5052 | 2.0 | 5030 | 1.2561 | 30.5921 | 15.3773 | 30.3685 | 30.3818 | 4.8390 |
| 1.5144 | 3.0 | 7545 | 1.2416 | 30.9064 | 15.5474 | 30.6746 | 30.691 | 4.8282 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Stern5497/sBert-swiss-legal-base
|
Stern5497
| 2023-04-26T11:41:09Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-04-26T09:24:47Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Stern5497/sBert-swiss-legal-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Stern5497/sBert-swiss-legal-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Stern5497/sBert-swiss-legal-base')
model = AutoModel.from_pretrained('Stern5497/sBert-swiss-legal-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Stern5497/sBert-swiss-legal-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 14247 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 5000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1424,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Dqcky/Gabagtha
|
Dqcky
| 2023-04-26T11:40:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T11:39:04Z |
---
license: creativeml-openrail-m
---
|
jcrOrganisation/ppo-pyramids
|
jcrOrganisation
| 2023-04-26T11:40:20Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-04-26T11:40:14Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: jcrOrganisation/ppo-pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NiamaLynn/sd-class-butterflies-32
|
NiamaLynn
| 2023-04-26T11:24:50Z | 37 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-04-26T11:24:35Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('NiamaLynn/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
pythonist/bert-base-cased-healthdemomodel
|
pythonist
| 2023-04-26T11:23:33Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-26T11:21:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-healthdemomodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-healthdemomodel
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5819
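A hedged usage sketch (not part of the original card), using the lower-level `AutoModelForQuestionAnswering` API; given the high evaluation loss, expect demo-quality answers. The question and context below are purely illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "pythonist/bert-base-cased-healthdemomodel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What usually relieves the symptoms?"
context = "Rest and plenty of fluids usually relieve the symptoms within a week."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span between them.
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```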
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 6.1760 |
| No log | 2.0 | 2 | 6.1161 |
| No log | 3.0 | 3 | 6.0619 |
| No log | 4.0 | 4 | 6.0120 |
| No log | 5.0 | 5 | 5.9641 |
| No log | 6.0 | 6 | 5.9177 |
| No log | 7.0 | 7 | 5.8738 |
| No log | 8.0 | 8 | 5.8334 |
| No log | 9.0 | 9 | 5.7938 |
| No log | 10.0 | 10 | 5.7589 |
| No log | 11.0 | 11 | 5.7289 |
| No log | 12.0 | 12 | 5.7019 |
| No log | 13.0 | 13 | 5.6746 |
| No log | 14.0 | 14 | 5.6499 |
| No log | 15.0 | 15 | 5.6293 |
| No log | 16.0 | 16 | 5.6122 |
| No log | 17.0 | 17 | 5.5995 |
| No log | 18.0 | 18 | 5.5905 |
| No log | 19.0 | 19 | 5.5848 |
| No log | 20.0 | 20 | 5.5819 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
worsty/dqn-SpaceInvadersNoFrameskip-v4-test6
|
worsty
| 2023-04-26T11:21:32Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T11:14:42Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 582.00 +/- 170.93
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga worsty -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga worsty -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga worsty
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|