| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 18:27:28) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 532 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 18:27:19) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| FvH14/wav2vec2-XLSR-53-DutchCommonVoice12 | FvH14 | 2023-03-30T19:41:27Z | 105 | 0 | transformers | [transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, generated_from_trainer, dataset:common_voice_12_0, license:apache-2.0, model-index, endpoints_compatible, region:us] | automatic-speech-recognition | 2023-03-29T16:10:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_12_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-nl-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_12_0
type: common_voice_12_0
config: nl
split: test
args: nl
metrics:
- name: Wer
type: wer
value: 0.579253889386658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-nl-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_12_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6772
- Wer: 0.5793
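A minimal inference sketch, assuming the standard `transformers` automatic-speech-recognition pipeline and a 16 kHz Dutch audio file (the file name below is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline
asr = pipeline("automatic-speech-recognition", model="FvH14/wav2vec2-XLSR-53-DutchCommonVoice12")

# "sample.wav" is a placeholder path for any 16 kHz Dutch recording
print(asr("sample.wav")["text"])
```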
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3349 | 0.52 | 250 | 0.6772 | 0.5793 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| StanleyRoberts/Nix | StanleyRoberts | 2023-03-30T19:39:04Z | 4 | 1 | transformers | [transformers, pytorch, gptj, text-generation, text generation, conversational, en, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, region:us] | text-generation | 2023-03-30T13:02:29Z |
---
license: creativeml-openrail-m
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: true
---
# Pygmalion 6B
## Model description
This is a fork of Pygmalion that allows longer input lengths for text-generation tasks via the Inference API.
Pygmalion 6B is a proof-of-concept dialogue model based on EleutherAI's [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B).
**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.
## Training data
The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.
## Training procedure
Model weights were initialized from the `uft-6b` ConvoGPT model made available in [this commit](https://huggingface.co/hakurei/convogpt/tree/41b67bfddb6cd97070ffddf708e9720c9cb8d224/6b-uft).
The model was then further fine-tuned on ~48.5 million tokens for ~5k steps on 4 NVIDIA A40s using DeepSpeed.
## Intended use
### The easy way
We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb).
### The manual way
The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [Your input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:
```
[CHARACTER]: [some dialogue here]
You: [your response to the dialogue above]
```
Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.
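As a rough illustration, a prompt built in that format could be fed to a standard `transformers` text-generation pipeline; the persona and messages below are invented placeholders:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="StanleyRoberts/Nix")

# Hypothetical persona and dialogue, following the format described above
prompt = (
    "Nix's Persona: A curious, friendly android who loves astronomy.\n"
    "<START>\n"
    "You: What did you look at through the telescope tonight?\n"
    "Nix:"
)

out = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```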
## Known issues
We haven't played around with the model enough to enumerate them. Feel free to give us some feedback!
| abcdgebop/moimad | abcdgebop | 2023-03-30T19:32:40Z | 33 | 0 | diffusers | [diffusers, safetensors, text-to-image, stable-diffusion, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us] | text-to-image | 2023-03-30T19:20:05Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### moimad Dreambooth model trained by abcdgebop with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| myklicious/dqn-SpaceInvadersNoFrameskip-v4 | myklicious | 2023-03-30T19:28:31Z | 0 | 0 | stable-baselines3 | [stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2023-03-30T19:24:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 462.00 +/- 136.07
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga myklicious -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga myklicious -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga myklicious
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 3000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 8),
('normalize', False)])
```
| mfidabel/ppo-LunarLander-v2 | mfidabel | 2023-03-30T19:24:58Z | 0 | 0 | stable-baselines3 | [stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2023-03-30T19:24:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.91 +/- 24.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
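A hedged sketch of the usual `huggingface_sb3` workflow (the checkpoint filename below is an assumption, not taken from this card):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is assumed; check the repository's file list if it differs
checkpoint = load_from_hub("mfidabel/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```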
| Seif/Reinforce-Reinforce-CartPole-v1 | Seif | 2023-03-30T19:05:20Z | 0 | 0 | null | [CartPole-v1, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us] | reinforcement-learning | 2023-03-30T19:05:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| FvH14/wav2vec2-large-xls-r-300m-cnh-colab | FvH14 | 2023-03-30T19:04:40Z | 105 | 0 | transformers | [transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, generated_from_trainer, dataset:common_voice_9_0, license:apache-2.0, endpoints_compatible, region:us] | automatic-speech-recognition | 2023-03-30T18:49:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_9_0
model-index:
- name: wav2vec2-large-xls-r-300m-cnh-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-cnh-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_9_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| Jojodecay/hollowknight3D_test | Jojodecay | 2023-03-30T18:55:12Z | 4 | 1 | diffusers | [diffusers, safetensors, text-to-image, stable-diffusion, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us] | text-to-image | 2023-03-30T18:02:25Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### hollowknight3D-jojodecay Dreambooth model trained by Jojodecay with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:





| inigo99/clasificador-rotten-tomatoes | inigo99 | 2023-03-30T18:51:18Z | 106 | 0 | transformers | [transformers, pytorch, bert, text-classification, classification, generated_from_trainer, dataset:rotten_tomatoes, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-03-30T18:50:36Z |
---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: clasificador-rotten-tomatoes
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8527204502814258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-rotten-tomatoes
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8343
- Accuracy: 0.8527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3971 | 1.0 | 1067 | 0.4166 | 0.8377 |
| 0.2056 | 2.0 | 2134 | 0.7931 | 0.8218 |
| 0.0672 | 3.0 | 3201 | 0.8343 | 0.8527 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| aegrif/CIS6930_DAAGR_T5_NoEmo | aegrif | 2023-03-30T18:46:24Z | 128 | 0 | transformers | [transformers, tf, t5, text2text-generation, generated_from_keras_callback, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2023-03-30T02:15:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: CIS6930_DAAGR_T5_NoEmo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CIS6930_DAAGR_T5_NoEmo
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3368
- Train Accuracy: 0.9629
- Validation Loss: 0.4438
- Validation Accuracy: 0.9496
- Epoch: 17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5062 | 0.9405 | 0.4590 | 0.9454 | 0 |
| 0.4381 | 0.9479 | 0.4477 | 0.9472 | 1 |
| 0.4249 | 0.9499 | 0.4423 | 0.9481 | 2 |
| 0.4152 | 0.9513 | 0.4386 | 0.9486 | 3 |
| 0.4071 | 0.9525 | 0.4365 | 0.9490 | 4 |
| 0.4000 | 0.9535 | 0.4349 | 0.9493 | 5 |
| 0.3935 | 0.9545 | 0.4338 | 0.9496 | 6 |
| 0.3876 | 0.9553 | 0.4337 | 0.9498 | 7 |
| 0.3816 | 0.9562 | 0.4338 | 0.9498 | 8 |
| 0.3763 | 0.9571 | 0.4343 | 0.9499 | 9 |
| 0.3708 | 0.9578 | 0.4338 | 0.9500 | 10 |
| 0.3657 | 0.9586 | 0.4357 | 0.9498 | 11 |
| 0.3605 | 0.9593 | 0.4355 | 0.9500 | 12 |
| 0.3556 | 0.9601 | 0.4370 | 0.9499 | 13 |
| 0.3507 | 0.9608 | 0.4380 | 0.9499 | 14 |
| 0.3463 | 0.9615 | 0.4397 | 0.9498 | 15 |
| 0.3413 | 0.9622 | 0.4427 | 0.9496 | 16 |
| 0.3368 | 0.9629 | 0.4438 | 0.9496 | 17 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| inigo99/clasificador-poem-sentiment | inigo99 | 2023-03-30T18:36:07Z | 105 | 0 | transformers | [transformers, pytorch, bert, text-classification, classification, generated_from_trainer, dataset:poem_sentiment, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-03-30T18:35:16Z |
---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- poem_sentiment
metrics:
- accuracy
model-index:
- name: clasificador-poem-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: poem_sentiment
type: poem_sentiment
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8653846153846154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-poem-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the poem_sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5413
- Accuracy: 0.8654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 112 | 0.4332 | 0.8654 |
| No log | 2.0 | 224 | 0.4227 | 0.8942 |
| No log | 3.0 | 336 | 0.5413 | 0.8654 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| abhijitkalta/distilbert-base-uncased-finetuned-emotion | abhijitkalta | 2023-03-30T18:21:12Z | 109 | 0 | transformers | [transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, dataset:emotion, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-03-30T17:57:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.933
- name: F1
type: f1
value: 0.9334700183474604
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- Accuracy: 0.933
- F1: 0.9335
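A minimal usage sketch, assuming the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="abhijitkalta/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see the results of this experiment!"))
```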
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2254 | 1.0 | 250 | 0.1806 | 0.922 | 0.9219 |
| 0.1394 | 2.0 | 500 | 0.1626 | 0.933 | 0.9335 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| kenasuka/raisa-2 | kenasuka | 2023-03-30T18:14:21Z | 32 | 0 | diffusers | [diffusers, safetensors, text-to-image, stable-diffusion, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us] | text-to-image | 2023-03-30T18:04:28Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### raisa-2 Dreambooth model trained by kenasuka with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| susindhar/aiproject-bert-qa | susindhar | 2023-03-30T17:47:38Z | 63 | 0 | transformers | [transformers, tf, distilbert, question-answering, generated_from_keras_callback, license:apache-2.0, endpoints_compatible, region:us] | question-answering | 2023-03-30T17:47:25Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aiproject-bert-qa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aiproject-bert-qa
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4990
- Validation Loss: 1.1426
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'Adam', 'config': {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.4990 | 1.1426 | 0 |
### Framework versions
- Transformers 4.28.0.dev0
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| BoschAI/Reinforce-pixelcopter | BoschAI | 2023-03-30T17:43:01Z | 0 | 0 | null | [Pixelcopter-PLE-v0, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us] | reinforcement-learning | 2023-03-28T22:40:28Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.20 +/- 17.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| ManarAli/Reinforce-pixelcopter | ManarAli | 2023-03-30T17:29:48Z | 0 | 0 | null | [Pixelcopter-PLE-v0, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us] | reinforcement-learning | 2023-03-28T22:06:37Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 28.90 +/- 21.64
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| auditi41/wav2vec2-large-xlsr-53-Bangla-Common_Voice | auditi41 | 2023-03-30T16:36:24Z | 116 | 0 | transformers | [transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, generated_from_trainer, dataset:common_voice_11_0, license:apache-2.0, model-index, endpoints_compatible, region:us] | automatic-speech-recognition | 2023-03-30T08:05:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53-Bangla-Common_Voice
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: bn
split: train+validation
args: bn
metrics:
- name: Wer
type: wer
value: 0.6576650727705051
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Bangla-Common_Voice
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6172
- Wer: 0.6577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.922 | 4.57 | 800 | 0.7379 | 0.8157 |
| 0.5136 | 9.14 | 1600 | 0.6155 | 0.7056 |
| 0.2759 | 13.71 | 2400 | 0.6172 | 0.6577 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| yngbless/yngblass | yngbless | 2023-03-30T16:32:02Z | 33 | 0 | diffusers | [diffusers, safetensors, text-to-image, stable-diffusion, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us] | text-to-image | 2023-03-30T16:21:46Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### yngblass Dreambooth model trained by yngbless with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| ghassenhannachi/a2c-PandaReachDense-v2 | ghassenhannachi | 2023-03-30T16:29:20Z | 0 | 0 | stable-baselines3 | [stable-baselines3, PandaReachDense-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2023-03-29T16:49:21Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.45 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| artbreguez/poca-SoccerTwos | artbreguez | 2023-03-30T16:23:50Z | 0 | 0 | ml-agents | [ml-agents, tensorboard, onnx, unity-ml-agents, deep-reinforcement-learning, reinforcement-learning, ML-Agents-SoccerTwos, region:us] | reinforcement-learning | 2023-03-30T16:23:44Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: artbreguez/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| LarryAIDraw/aCertainScientificRailgun4in1_v1 | LarryAIDraw | 2023-03-30T16:19:26Z | 0 | 0 | null | [license:creativeml-openrail-m, region:us] | null | 2023-03-30T16:18:24Z |
---
license: creativeml-openrail-m
---
| ronanki/all_mpnet_128_10_MNR_PT | ronanki | 2023-03-30T16:16:13Z | 7 | 0 | sentence-transformers | [sentence-transformers, pytorch, mpnet, feature-extraction, sentence-similarity, autotrain_compatible, endpoints_compatible, region:us] | sentence-similarity | 2023-03-30T16:06:25Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
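Continuing from the snippet above, the embeddings can then be compared, for example with the library's cosine-similarity helper:
```python
from sentence_transformers import util

# Cosine similarity between the two example sentences encoded above
print(util.cos_sim(embeddings[0], embeddings[1]))
```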
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 449 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 449,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| Brizape/Variome_5e-05_30_03 | Brizape | 2023-03-30T16:12:35Z | 105 | 0 | transformers | [transformers, pytorch, bert, token-classification, generated_from_trainer, license:mit, autotrain_compatible, endpoints_compatible, region:us] | token-classification | 2023-03-30T15:41:07Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Variome_5e-05_30_03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Variome_5e-05_30_03
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0631
- Precision: 0.5610
- Recall: 0.5068
- F1: 0.5325
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6582 | 0.51 | 25 | 0.1765 | 0.0 | 0.0 | 0.0 | 0.9769 |
| 0.1545 | 1.02 | 50 | 0.1746 | 0.0 | 0.0 | 0.0 | 0.9769 |
| 0.1544 | 1.53 | 75 | 0.1770 | 0.0 | 0.0 | 0.0 | 0.9769 |
| 0.1608 | 2.04 | 100 | 0.1752 | 0.0 | 0.0 | 0.0 | 0.9769 |
| 0.1552 | 2.55 | 125 | 0.1726 | 0.0 | 0.0 | 0.0 | 0.9769 |
| 0.1591 | 3.06 | 150 | 0.1582 | 0.0 | 0.0 | 0.0 | 0.9769 |
| 0.1185 | 3.57 | 175 | 0.1142 | 0.2978 | 0.0703 | 0.1138 | 0.9778 |
| 0.0979 | 4.08 | 200 | 0.1046 | 0.2865 | 0.1584 | 0.2041 | 0.9792 |
| 0.0889 | 4.59 | 225 | 0.0923 | 0.3965 | 0.2151 | 0.2789 | 0.9811 |
| 0.0749 | 5.1 | 250 | 0.0819 | 0.4126 | 0.3295 | 0.3664 | 0.9827 |
| 0.0622 | 5.61 | 275 | 0.0756 | 0.4497 | 0.3987 | 0.4227 | 0.9838 |
| 0.0635 | 6.12 | 300 | 0.0699 | 0.4970 | 0.4355 | 0.4642 | 0.9850 |
| 0.048 | 6.63 | 325 | 0.0672 | 0.5225 | 0.4512 | 0.4842 | 0.9852 |
| 0.0486 | 7.14 | 350 | 0.0663 | 0.5457 | 0.4827 | 0.5122 | 0.9852 |
| 0.0464 | 7.65 | 375 | 0.0666 | 0.5623 | 0.4879 | 0.5225 | 0.9856 |
| 0.043 | 8.16 | 400 | 0.0636 | 0.5464 | 0.5005 | 0.5225 | 0.9857 |
| 0.0393 | 8.67 | 425 | 0.0636 | 0.5693 | 0.4869 | 0.5249 | 0.9860 |
| 0.036 | 9.18 | 450 | 0.0636 | 0.5641 | 0.4942 | 0.5268 | 0.9858 |
| 0.0373 | 9.69 | 475 | 0.0637 | 0.5735 | 0.5037 | 0.5363 | 0.9860 |
| 0.0382 | 10.2 | 500 | 0.0631 | 0.5610 | 0.5068 | 0.5325 | 0.9859 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| facebook/blenderbot-1B-distill | facebook | 2023-03-30T16:12:16Z | 1,553 | 37 | transformers | [transformers, pytorch, blenderbot, text2text-generation, convAI, conversational, facebook, en, dataset:blended_skill_talk, arxiv:1907.06616, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2022-03-02T23:29:05Z |
---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
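A minimal usage sketch, assuming the standard `transformers` Blenderbot classes:
```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-1B-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

inputs = tokenizer("Hello, how are you doing today?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```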
| pelinbalci/ppo-LunarLander-v2 | pelinbalci | 2023-03-30T15:31:34Z | 3 | 0 | stable-baselines3 | [stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2023-03-30T15:31:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.94 +/- 16.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| educanto/my_awesome_model | educanto | 2023-03-30T15:19:07Z | 53 | 0 | transformers | [transformers, tf, distilbert, text-classification, generated_from_keras_callback, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-03-30T13:40:45Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: educanto/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# educanto/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1332
- Validation Loss: 0.1916
- Train Accuracy: 0.9292
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2536 | 0.1928 | 0.9291 | 0 |
| 0.1332 | 0.1916 | 0.9292 | 1 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.11.0
- Tokenizers 0.13.2
| EvaOr/DeepRL_chp5_MLAgents_PyramidsTraining | EvaOr | 2023-03-30T15:10:27Z | 2 | 0 | ml-agents | [ml-agents, tensorboard, onnx, Pyramids, deep-reinforcement-learning, reinforcement-learning, ML-Agents-Pyramids, region:us] | reinforcement-learning | 2023-03-30T15:10:21Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: EvaOr/DeepRL_chp5_MLAgents_PyramidsTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| kuanyk/robustness | kuanyk | 2023-03-30T15:01:04Z | 0 | 0 | null | [CartPole-v1, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us] | reinforcement-learning | 2023-02-06T16:48:17Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: robustness
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| vocabtrimmer/mbart-large-cc25-koquad-qa-trimmed-ko-30000 | vocabtrimmer | 2023-03-30T14:58:19Z | 105 | 0 | transformers | [transformers, pytorch, mbart, text2text-generation, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2023-03-30T14:37:08Z |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-koquad-qa](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qa): `vocabtrimmer/mbart-large-cc25-koquad-qa-trimmed-ko-30000`
This model is a trimmed version of [lmqg/mbart-large-cc25-koquad-qa](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qa), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of a language model to compress its size.
The following table summarizes the trimming process.
| | lmqg/mbart-large-cc25-koquad-qa | vocabtrimmer/mbart-large-cc25-koquad-qa-trimmed-ko-30000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 385,548,288 |
| parameter_size_embedding | 512,057,344 | 61,448,192 |
| vocab_size | 250,028 | 30,004 |
| compression_rate_full | 100.0 | 63.12 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | 30000 | 2 |
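A minimal loading sketch, assuming the trimmed checkpoint is used as a drop-in `transformers` seq2seq model (the expected question/context input format of the underlying lmqg model is not shown here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "vocabtrimmer/mbart-large-cc25-koquad-qa-trimmed-ko-30000"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)
```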
| Agneev/Reinforce-PixelCopter | Agneev | 2023-03-30T14:52:35Z | 0 | 0 | null | [Pixelcopter-PLE-v0, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us] | reinforcement-learning | 2023-03-30T14:52:31Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.80 +/- 25.39
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| vijmeister/Reinforce-CartPole-v1 | vijmeister | 2023-03-30T14:43:18Z | 0 | 0 | null | [CartPole-v1, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us] | reinforcement-learning | 2023-03-30T14:43:10Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| Sera47/q-FrozenLake-v1-4x4-noSlippery | Sera47 | 2023-03-30T14:39:09Z | 0 | 0 | null | [FrozenLake-v1-4x4, q-learning, reinforcement-learning, custom-implementation, model-index, region:us] | reinforcement-learning | 2023-03-30T14:39:00Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.74 +/- 0.44
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym
# `load_from_hub` is the helper defined in the Deep RL Course notebook (Unit 2)
model = load_from_hub(repo_id="Sera47/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| OlgaVityuk/q-FrozenLake-v1-4x4-noSlippery | OlgaVityuk | 2023-03-30T14:34:33Z | 0 | 0 | null | [FrozenLake-v1-4x4-no_slippery, q-learning, reinforcement-learning, custom-implementation, model-index, region:us] | reinforcement-learning | 2023-03-30T14:34:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym
# `load_from_hub` is the helper defined in the Deep RL Course notebook (Unit 2)
model = load_from_hub(repo_id="OlgaVityuk/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| vjsyong/my_awesome_model | vjsyong | 2023-03-30T14:32:02Z | 62 | 0 | transformers | [transformers, tf, distilbert, text-classification, generated_from_keras_callback, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-03-24T08:25:16Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vjsyong/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vjsyong/my_awesome_model
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0536
- Validation Loss: 0.2951
- Train Accuracy: 0.9169
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3400 | 0.2347 | 0.9037 | 0 |
| 0.1873 | 0.2533 | 0.9021 | 1 |
| 0.1064 | 0.2473 | 0.9156 | 2 |
| 0.0536 | 0.2951 | 0.9169 | 3 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.10.0
- Datasets 2.10.1
- Tokenizers 0.13.2
| Niraya666/Reinforce-Pixelcopter-PLE-v0 | Niraya666 | 2023-03-30T14:18:12Z | 0 | 0 | null | [Pixelcopter-PLE-v0, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us] | reinforcement-learning | 2023-03-30T14:18:08Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 10.20 +/- 10.58
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| boudchicha/soluzione | boudchicha | 2023-03-30T14:13:42Z | 0 | 1 | diffusers | [diffusers, medical, chemistry, biology, conversational, en, fr, dataset:pubmed, dataset:medical_questions_pairs, dataset:wiki_bio, license:bsd, region:us] | text-generation | 2023-03-30T13:46:38Z |
---
license: bsd
datasets:
- pubmed
- medical_questions_pairs
- wiki_bio
language:
- en
- fr
metrics:
- accuracy
tags:
- medical
- chemistry
- biology
library_name: diffusers
pipeline_tag: conversational
---
| sp02/distilbert-base-uncased-finetuned-emotion | sp02 | 2023-03-30T14:13:06Z | 124 | 0 | transformers | [transformers, pytorch, distilbert, text-classification, generated_from_trainer, dataset:emotion, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2022-05-12T03:18:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245103641171362
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2103
- Accuracy: 0.9245
- F1: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8 | 1.0 | 250 | 0.3021 | 0.907 | 0.9052 |
| 0.2396 | 2.0 | 500 | 0.2103 | 0.9245 | 0.9245 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.1
| vocabtrimmer/mt5-small-squad-qa-trimmed-en-120000 | vocabtrimmer | 2023-03-30T14:11:14Z | 105 | 0 | transformers | [transformers, pytorch, mt5, text2text-generation, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2023-03-30T13:49:54Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qa](https://huggingface.co/lmqg/mt5-small-squad-qa): `vocabtrimmer/mt5-small-squad-qa-trimmed-en-120000`
This model is a trimmed version of [lmqg/mt5-small-squad-qa](https://huggingface.co/lmqg/mt5-small-squad-qa), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of a language model to compress its size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-squad-qa | vocabtrimmer/mt5-small-squad-qa-trimmed-en-120000 |
|:---------------------------|:--------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 166,944,128 |
| parameter_size_embedding | 256,103,424 | 122,882,048 |
| vocab_size | 250,101 | 120,002 |
| compression_rate_full | 100.0 | 55.62 |
| compression_rate_embedding | 100.0 | 47.98 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 120000 | 2 |
| Corianas/64CharGPT | Corianas | 2023-03-30T14:10:51Z | 3 | 0 | transformers | [transformers, gpt2, text-generation, en, endpoints_compatible, region:us] | text-generation | 2023-03-10T23:18:21Z |
---
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
Vocab is:
```
\n\" !$&'#,/+=-<>*@.:;[]^?0123456789abcdefghijklmnopqrstuvwxyzèé§↨
§ (made from alt+21) was used as end of file/sample
↨ (made from alt+23) is the shift key (it gets removed and the next character is replaced with an uppercase character)
```
The model is trained on scraped YouTube subtitles and Whisper transcripts of YouTube/TV shows, totalling approximately 2.3 billion tokens after processing.
The data was deduplicated, had all-UPPERCASE samples removed, and was run through a 'ranker' that removed stray data (such as total gibberish) that had somehow ended up in the YouTube subtitles.
Training took 72 hours and was stopped when overfitting occurred (this is checkpoint 264,000 out of a planned 400,000).
```
gradient_accumulation_steps = 2 # used to simulate larger batch sizes
batch_size = 45 # if gradient_accumulation_steps > 1, this is the micro-batch size
block_size = 768
n_layer = 12
n_head = 8
n_embd = 512
dropout = 0.00001 # for pretraining 0 is good, for finetuning try 0.1+
bias = False # do we use bias inside LayerNorm and Linear layers?
learning_rate = 0.0008 # max learning rate
min_lr = 0.00008
```
Function to fix text from the model:
```
def remove_caseifer(text):
    # Undo the shift-key encoding: "↨x" becomes "X".
    new_text = ""
    i = 0
    while i < len(text):
        if text[i] == "↨":
            if i + 1 < len(text):
                new_text += text[i + 1].upper()
                i += 1  # consume the character that was just uppercased
            # a trailing "↨" with nothing after it is simply dropped
        else:
            new_text += text[i]
        i += 1
    return new_text
```
Function to prepare text for the model:
```
def add_caseifer(text):
    # Characters the model can emit; anything else is dropped from the input.
    tokenlist = set("\n\" !$&'#,/+=-<>*@.:;[]{}()^?0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzèé")
    replace_map = {  # characters mapped onto in-vocabulary equivalents
        "{": "[",
        "(": "[",
        "}": "]",
        ")": "]"
    }
    upperlist = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    new_text = ""
    for char in text:
        if char in tokenlist:
            if char in upperlist:
                # Uppercase letters become the shift marker "↨" plus the lowercase letter.
                new_text += "↨" + char.lower()
            elif char in replace_map:
                new_text += replace_map[char]
            else:
                new_text += char
        else:
            continue  # drop out-of-vocabulary characters
    return new_text
```
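A quick round trip with the two helpers above shows how the shift marker works:
```python
# Round trip: uppercase letters are encoded as "↨" + lowercase and restored on decode.
encoded = add_caseifer("Hello World")
print(encoded)                   # ↨hello ↨world
print(remove_caseifer(encoded))  # Hello World
```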
|
vocabtrimmer/mbart-large-cc25-koquad-qa-trimmed-ko-10000
|
vocabtrimmer
| 2023-03-30T14:08:00Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T13:47:11Z |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-koquad-qa](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qa): `vocabtrimmer/mbart-large-cc25-koquad-qa-trimmed-ko-10000`
This model is a trimmed version of [lmqg/mbart-large-cc25-koquad-qa](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-koquad-qa | vocabtrimmer/mbart-large-cc25-koquad-qa-trimmed-ko-10000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 365,068,288 |
| parameter_size_embedding | 512,057,344 | 20,488,192 |
| vocab_size | 250,028 | 10,004 |
| compression_rate_full | 100.0 | 59.76 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | 10000 | 2 |
|
wanghao2023/uganda-labor-market-interview-text-classification
|
wanghao2023
| 2023-03-30T14:05:08Z | 109 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-29T22:27:17Z |
---
language: en
license: mit
---
# Uganda Labor Market Interview Text Classification
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) fine-tuned on text transcripts of interviews between Vocational Training Institute (VTI) students and their successful alumni in Uganda on the subject of the labor market.
## Model description
The model classifies sentences into six distinct categories, with some sentences potentially being assigned to multiple topics. The classification criteria are as follows:
- **info**: Pertinent details about the job market, working conditions, salaries, and expectations in the workplace, as well as the alumni's and students' current job market situations, career plans, and past experiences. If strategies are mentioned in this context, the sentence is also classified as a strategy.
- **tip**: Advice on workplace behavior and self-improvement, primarily emphasizing discipline, humility, treating colleagues and clients well, and avoiding illegal activities. If these tips are associated with an increased likelihood of employment, the sentence is also classified as a strategy.
- **strategy**: Guidance aimed at enhancing students' chances of securing employment or better job opportunities, covering aspects such as company research, application creation and submission, interview conduct, networking, and general advice for enhancing job-related skills. Additionally, this category includes tips for starting a business, such as capital accumulation, location scouting, business models, equipment procurement, and client attraction and retention.
- **motivation**: General recommendations for maintaining confidence, patience, persistence, engagement, and optimism in the job market. If specific contexts are provided for these recommendations, the sentence may also be classified as a strategy or tip accordingly.
- **referral**: Directing students to companies or individuals, or providing affirmative responses to students' requests for connections.
- **neutral**: Introductions, contact exchanges, purely technical content, unrelated school or exam discussions, miscellaneous conversations that do not fit into the other five topics, and unclear content due to language deficiencies or translation issues.
### How to use
You can use this model directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> pipe = pipeline("text-classification", model= "wanghao2023/uganda-labor-market-interview-text-classification", tokenizer = "wanghao2023/uganda-labor-market-interview-text-classification", return_all_scores = True)
>>> pipe("if they think you know too much, they won't teach you.")
[[{'label': 'is_info', 'score': 0.18128268420696259},
{'label': 'is_tip', 'score': 0.5684323310852051},
{'label': 'is_strategy', 'score': 0.22818608582019806},
{'label': 'is_motivation', 'score': 0.03250108286738396},
{'label': 'is_neutral', 'score': 0.05972086638212204},
{'label': 'is_referral', 'score': 0.013502764515578747}]]
```
### Limitations and bias
Sentence classification is heavily dependent on context. For instance, the phrase "be patient" could be categorized as a tip, strategy, and/or motivation, depending on the specific context in which the alumni advises patience. The context determines whether the advice pertains to interviews, workplace behavior, or general motivation.
## Evaluation results
This model achieves the following results when tested on the validation dataset (multilabel, threshold = 0.3). There is ample room for improvement, but it performs much better than a dice roll:
| F1 | Roc Auc | Accuracy |
|:----:|:----:|:----:|
| 0.655779 | 0.799979 | 0.552670 |
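For instance, the 0.3 multilabel threshold can be applied to the pipeline output shown above as follows (a minimal sketch reusing the `pipe` object defined earlier):
```python
# Keep every topic whose score clears the multilabel threshold of 0.3.
scores = pipe("if they think you know too much, they won't teach you.")[0]
topics = [s["label"] for s in scores if s["score"] >= 0.3]
print(topics)  # ['is_tip'] for this example
```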
|
miugod/bibert-iwslt14ende
|
miugod
| 2023-03-30T14:03:27Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-30T13:55:20Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bibert-ende
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bibert-ende
This model is a fine-tuned version of [jhu-clsp/bibert-ende](https://huggingface.co/jhu-clsp/bibert-ende) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8713
- Accuracy: 0.6310
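As a quick sanity check, the checkpoint can be loaded with the `fill-mask` pipeline; this is only a sketch, since the exact preprocessing used for the IWSLT14 En-De data is not documented here.
```python
from transformers import pipeline

# Minimal sketch; uses the tokenizer's own mask token rather than assuming its literal form.
fill = pipeline("fill-mask", model="miugod/bibert-iwslt14ende")
print(fill(f"the weather is {fill.tokenizer.mask_token} today ."))
```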
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.10.1
- Tokenizers 0.13.2
|
vocabtrimmer/mt5-small-squad-qa-trimmed-en-90000
|
vocabtrimmer
| 2023-03-30T13:41:58Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T13:22:10Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qa](https://huggingface.co/lmqg/mt5-small-squad-qa): `vocabtrimmer/mt5-small-squad-qa-trimmed-en-90000`
This model is a trimmed version of [lmqg/mt5-small-squad-qa](https://huggingface.co/lmqg/mt5-small-squad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-squad-qa | vocabtrimmer/mt5-small-squad-qa-trimmed-en-90000 |
|:---------------------------|:--------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 136,224,128 |
| parameter_size_embedding | 256,103,424 | 92,162,048 |
| vocab_size | 250,101 | 90,002 |
| compression_rate_full | 100.0 | 45.38 |
| compression_rate_embedding | 100.0 | 35.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 90000 | 2 |
|
ccarvajal-reyes/beto-prescripciones-medicas
|
ccarvajal-reyes
| 2023-03-30T13:35:25Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-17T20:26:26Z |
---
language:
- es
widget:
- text: "PARACETAMOL 500 MG COMPRIMIDO 1 COMPRIMIDO ORAL cada 6 horas durante 3 dias"
---
# beto-prescripciones-medicas
Fine-tuning [BETO](https://github.com/dccuchile/beto) for entity detection in medical prescriptions. More models and details can be found [in our repository](https://github.com/camilocarvajalreyes/entidades-minsal).
This is a fine-tuned version of [bert-clinical-scratch-wl-es](https://huggingface.co/plncmm/bert-clinical-scratch-wl-es) from [PLN group @ CMM](https://huggingface.co/plncmm),
which is in turn a fine-tuned version of [bert-base-spanish-wwm-uncased (BETO)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) from [DCC UChile](https://huggingface.co/dccuchile).
This work is part of a project that aims to build entity recognition models for prescription data from Minsal (Chile's Health Ministry), developed for the MDS7201 course of the Data Science MSc program at UChile.
We use data from a Chilean hospital, which is not available for public use, but we do provide the files with which we trained the models.
The procedure is the following:
- We use a [model based on regular expressions (RegEx)](https://github.com/camilocarvajalreyes/entidades-minsal/blob/main/datos/Etiquetado/RegExV2.0.ipynb) to tag around 100k unique samples from the original dataset.
- We fine-tune [bert-clinical-scratch-wl-es](https://huggingface.co/plncmm/bert-clinical-scratch-wl-es) using the data tagged with the RegEx method (5 epochs).
- We further fine-tune the model with human-tagged data (800 samples, 20 epochs).
- The model is tested on human-tagged data (200 samples).
The resulting evaluation metrics are the following:
| f1 | precision | recall |
|:---:|---|---|
| 0.93 | 0.92 | 0.94 |
**Collaborators**:
- Daniel Carmona G. (Ing. Civil Eléctrica)
- Martín Sepúlveda (Ing. Civil Eléctrica)
- Monserrat Prado (Ing. Civil en Ciencias de la Computación)
- Camilo Carvajal Reyes (Ing. Civil Matemática)
Supervised by:
- Patricio Wolff (Minsal)
- Constanza Contreras (Docente MDS7201)
- Francisco Förster (Docente MDS7201)
## Example
We provide a [demo](https://github.com/camilocarvajalreyes/entidades-minsal/blob/main/demo_minsal/demo.ipynb).
Here we introduce the functions that are necessary to translate the model's output into understandable tags.
We also provide a complementary model: [beto-prescripciones-medicas-ADMIN](https://huggingface.co/ccarvajal/beto-prescripciones-medicas-ADMIN).
That model further tags the tokens that the current model labels as ADMIN.
The [demo](https://github.com/camilocarvajalreyes/entidades-minsal/blob/main/demo_minsal/demo.ipynb) includes such model, and the output of both is shown as an example below:
| ACTIVE_PRINCIPLE | FORMA_FARMA | CANT-ADMIN | UND-ADMIN | VIA-ADMIN | PERIODICITY | DURATION |
|---:|---:|---:|---:|---:|---:|---:|
| PARACETAMOL | 500 MG COMPRIMIDO | 1 | COMPRIMIDO | ORAL | cada 6 horas | durante 3 dias |
This example is also shown in [this notebook](https://github.com/camilocarvajalreyes/entidades-minsal/blob/main/demo_minsal/demo_minimalista.ipynb), which uses the model as a blackbox.
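For a quick look at the raw predictions without the demo notebook, the checkpoint can be loaded with the token-classification pipeline; this is only a sketch, and the aggregation strategy (and the label mapping used in the demo) are assumptions here.
```python
from transformers import pipeline

# Minimal sketch: raw entity predictions; see the demo notebook for mapping them to the table above.
ner = pipeline(
    "token-classification",
    model="ccarvajal-reyes/beto-prescripciones-medicas",
    aggregation_strategy="simple",  # grouping strategy is an assumption
)
print(ner("PARACETAMOL 500 MG COMPRIMIDO 1 COMPRIMIDO ORAL cada 6 horas durante 3 dias"))
```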
## Reproducibility
Training parameters (fine-tuning on RegEx data):
```python
training_args = TrainingArguments(
output_dir="./results",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=5,
weight_decay=0.01
)
```
Training parameters (fine-tuning on human-tagged data):
```python
training_args = TrainingArguments(
output_dir = "./results",
evaluation_strategy = "epoch",
learning_rate = 2e-5,
per_device_train_batch_size = 16,
per_device_eval_batch_size = 16,
num_train_epochs = 20,
weight_decay = 0.01,
)
```
|
OedoSoldier/animix
|
OedoSoldier
| 2023-03-30T13:34:13Z | 0 | 97 | null |
[
"stable-diffusion",
"text-to-image",
"dataset:embed/EasyNegative",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-24T13:14:21Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
datasets: embed/EasyNegative
---
# Check the new [Ambientmix](https://huggingface.co/OedoSoldier/ambientmix)!
## Descriptions
Using this model will result in clean, anatomically-correct images that accurately capture the essence of anime-style art, complete with stunning backgrounds.
Two models are provided: an 18 MB LoRA model and a full base model that merges the LoRA into Anything V4.5. The full model is recommended as a base for training your own character models, and is particularly effective for anime characters.
## Recommend settings:
- VAE: Orangemix (the same with NAI)
- LoRA Strength: 1 (if you're using the LoRA version)
- Sampler: DPM++ 2M Karras
- Sampling steps: 20
- Negative embedding: [EasyNegative](https://civitai.com/models/7808)、[badhandv4](https://civitai.com/models/16993/badhandv4-animeillustdiffusion)
## Samples
Note: all the LoRA names used in these samples are my local names; change them to your saved LoRA filenames!

```
masterpiece, best quality, 1girl, solo, light smile, mountain, lake, meadow, panorama, jacket, kneehighs, boots
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 738622193, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1girl, solo, looking_at_viewer, smile, open_mouth, skirt, shirt, hair_ornament, pink_hair, jacket, pink_hair, :d, multicolored_hair, pleated_skirt, wings, choker, hairclip, hood, pink_eyes, hair_bun, chibi, black_shirt, double_bun, black_choker, blush, white_skirt, feathered_wings, angel_wings, white_wings, sky, flying, halo, hand up, skyscraper, angel, from top
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4110544683, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1girl, solo, looking away, expressionless, from side, white dress, colorful, floral background, rain, lake, fog, barefoot, sitting on water, from top,
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 223540873, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality,1girl, adjusting clothes, adjusting headwear, blush, bow, bowtie, breasts, brown eyes, brown hair, cloak, cloud, cloudy sky, crescent moon, dress, fantasy, flower, glowing, glowing flower, hat, light particles, lily pad, long hair, looking at viewer, moon, moonlight, mountain, mountainous horizon, night, outdoors, parted lips, pointy ears, pond, sky, small breasts, star (sky), starry sky, very long hair, wading, water lily flower, wind, witch, witch hat
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1795361781, Size: 512x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
best quality, extremely detailed CG unity 8k,close up, illustration, depth of field,cowboy shot,the character is centered,symmetrical composition, (1 girl),red eyes,Wolf tail,Wolf ears,Very long hair ,Messy hair,disheveled hair, ,(beautiful detailed eyes),(Crown:1.1),pleated dress,puffy long sleeves, (moon:1.2), ((The black clouds)),(((flowing transparent black))),(floating black cloud:1.2),building architecture, depth of field,castle,black and white melt
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4091383013, Size: 512x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1man, solo, jacket, hand in pocket, school bag, black hair, black eyes, cyberpunk, street, machinery, motor vehicle, motorcycle, panorama, sunglasses
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 223585745, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1boy, flat color, limited palette, low contrast, (ligne claire), long straight black hair, looking away, standing. smoke, night sky, city, sunset, sky scrapers, bridge, depth of field, black, red, orange, brown, autumn, haze,
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 646089941, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1girl, smile, one eye closed, dutch angle, blonde hair, twintails, blue eyes, cowboy shot, maid dress, heart hands
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 278484553, Size: 512x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1girl, solo, long_hair, looking_at_viewer, smile, dress, ribbon, jewelry, very_long_hair, hair_ribbon, flower, bracelet, two_side_up, hand_on_own_face, head_rest, hand_on_own_cheek
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1453509491, Size: 512x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1girl, solo, black hair, medium hair, red eyes, blunt bangs, petite, expressionless, red skirt, white legwear, thighhighs, suspender skirt, white shirt, mary janes, night, dark, shadow
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 407943314, Size: 512x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1girl, solo, long_hair, looking_at_viewer, white hair, red eyes, smile, bangs, skirt, shirt, long_sleeves, hat, dress, bow, holding, closed_mouth, flower, frills, hair_flower, petals, bouquet, holding_flower, center_frills, bonnet, holding_bouquet, flower field, flower field, colorful
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1690640466, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
impasto,((((1girl)))),Metaverse,original,((an extremely delicate and beautiful)),(cyan theme),((intricate detail)),((((ultra-detailed))),((illustration)),(((masterpiece))),((extremely detailed CG unity 8k wallpaper)),highlight,sharpening,detailed face,((Perfect details)),(binary numbers),Science fiction,sense of digital,cold light,((data in the eyes)),((data adorns hair)),0 and 1 code,digitization,Running data,system screen,mathematical equation,young girl,(solo),(yubao)
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 293734715, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality, 1girl, solo, long_hair, from top, light smile, panorama, perspective, looking_at_viewer, bangs, skirt, shirt, black_hair, long_sleeves, bow, ribbon, twintails, hair_bow, heart, pantyhose, frills, shoes, choker, blunt_bangs, black_skirt, pink_eyes, frilled_skirt, pink_bow, platform_footwear, pink_theme, jirai_kei, full body, night, street, from behind, looking back, skyscraper, neon trim, panorama, perspective, starry sky, black theme, dark, shadow
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1148006396, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece,1girl,solo,incredibly absurdres,hoodie,headphones, street,outdoors,rain,neon lights, light smile, hood up, hands in pockets, looking away, from side
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3552918625, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
masterpiece, best quality,1girl, solo, bangs, bare shoulders, bat wings, blonde hair, blush, breasts, bridal gauntlets, seductive smile, eyes visible through hair, fingernails, garter straps, hair ornament, long hair, looking at viewer, pointy ears, red eyes, small breasts, thighhighs, castle, vampire, white thighhighs, wings, night, standing, grin
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3007804048, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

```
anirl, best quality, ultra high res, 1girl, hatsune miku, full body, scenery, smile, ocean, sunset, city, barefoot, footprints, sand
Negative prompt: EasyNegative, badhandv4
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3051695426, Size: 576x768, Model hash: ad0e54efe2, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: SwinIR_4x
```

## Models used
Merged with block weights tweaked:
- 2020s Anime Magazine Illustration Style
- Anime-like 2D (extracted LoRA)
- Anime Lineart Style
- Anime Screencap Style
- Avas Anime Hamster
- Epi Noise Offset
- Hipoly 3D Model Lora
- Makoto Shinkai Substyles
## See also
Original post on Civitai: https://civitai.com/models/23723
|
pastells/ppo-PyramidsRND
|
pastells
| 2023-03-30T13:32:38Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-30T13:30:55Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: pastells/ppo-PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
JYumeko/my_awesome_billsum_model
|
JYumeko
| 2023-03-30T13:31:42Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T05:39:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1538
- Rouge1: 0.1789
- Rouge2: 0.1075
- Rougel: 0.1585
- Rougelsum: 0.1584
- Gen Len: 19.0
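A minimal usage sketch with the `summarization` pipeline (the input text and generation settings are only illustrative):
```python
from transformers import pipeline

# Minimal sketch; the checkpoint is a fine-tuned t5-small, so the summarization pipeline applies.
summarizer = pipeline("summarization", model="JYumeko/my_awesome_billsum_model")
text = "The bill requires state agencies to publish annual reports on spending and to make the underlying data available online."
print(summarizer(text, max_length=60, min_length=10, do_sample=False))
```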
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.2372 | 1.0 | 1635 | 0.1729 | 0.1764 | 0.1032 | 0.1554 | 0.1553 | 19.0 |
| 0.2077 | 2.0 | 3270 | 0.1602 | 0.1774 | 0.1054 | 0.1569 | 0.1567 | 19.0 |
| 0.197 | 3.0 | 4905 | 0.1550 | 0.1788 | 0.1073 | 0.1584 | 0.1583 | 19.0 |
| 0.1924 | 4.0 | 6540 | 0.1538 | 0.1789 | 0.1075 | 0.1585 | 0.1584 | 19.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
yumingyi/rl_course_vizdoom_health_gathering_supreme-2
|
yumingyi
| 2023-03-30T13:17:44Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T12:55:13Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.43 +/- 6.82
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r yumingyi/rl_course_vizdoom_health_gathering_supreme-2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme-2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme-2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
manuelmaiorano/Reinforce-pixelcopter
|
manuelmaiorano
| 2023-03-30T13:13:14Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T13:13:06Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.80 +/- 28.76
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bingcheng45/autotrain-nlp-45198113367
|
bingcheng45
| 2023-03-30T13:06:01Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:bingcheng45/autotrain-data-nlp",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-30T13:01:51Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bingcheng45/autotrain-data-nlp
co2_eq_emissions:
emissions: 1.8668016992060357
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 45198113367
- CO2 Emissions (in grams): 1.8668
## Validation Metrics
- Loss: 5.278
- Accuracy: 0.051
- Macro F1: 0.057
- Micro F1: 0.051
- Weighted F1: 0.044
- Macro Precision: 0.063
- Micro Precision: 0.051
- Weighted Precision: 0.049
- Macro Recall: 0.069
- Micro Recall: 0.051
- Weighted Recall: 0.051
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bingcheng45/autotrain-nlp-45198113367
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bingcheng45/autotrain-nlp-45198113367", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bingcheng45/autotrain-nlp-45198113367", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
markeidsaune/q-Taxi-v3
|
markeidsaune
| 2023-03-30T13:04:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T13:04:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="markeidsaune/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
vocabtrimmer/mt5-small-squad-qa-trimmed-en-30000
|
vocabtrimmer
| 2023-03-30T12:56:41Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T12:38:43Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qa](https://huggingface.co/lmqg/mt5-small-squad-qa): `vocabtrimmer/mt5-small-squad-qa-trimmed-en-30000`
This model is a trimmed version of [lmqg/mt5-small-squad-qa](https://huggingface.co/lmqg/mt5-small-squad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-squad-qa | vocabtrimmer/mt5-small-squad-qa-trimmed-en-30000 |
|:---------------------------|:--------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 74,784,128 |
| parameter_size_embedding | 256,103,424 | 30,722,048 |
| vocab_size | 250,101 | 30,002 |
| compression_rate_full | 100.0 | 24.91 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 30000 | 2 |
|
Horken/q_taxi_v2
|
Horken
| 2023-03-30T12:45:30Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T12:41:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_taxi_v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Horken/q_taxi_v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vocabtrimmer/mt5-small-trimmed-en-enquad-qg
|
vocabtrimmer
| 2023-03-30T12:44:28Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T12:41:23Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-en-enquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 21.84
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 49.16
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 23.97
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 90.06
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 62.83
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-en-enquad-qg`
This model is a fine-tuned version of [ckpts/mt5-small-trimmed-en](https://huggingface.co/ckpts/mt5-small-trimmed-en) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mt5-small-trimmed-en](https://huggingface.co/ckpts/mt5-small-trimmed-en)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="vocabtrimmer/mt5-small-trimmed-en-enquad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-en-enquad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-enquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.06 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 54.15 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 37.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 28.32 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 21.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 23.97 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 62.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 49.16 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: ckpts/mt5-small-trimmed-en
- max_length: 512
- max_length_output: 32
- epoch: 14
- batch: 16
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-enquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
dcinside1/AbyssSweetiepie
|
dcinside1
| 2023-03-30T12:43:25Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-03-30T12:32:50Z |
---
license: gpl-3.0
---
copy of https://arca.live/b/aiart/72796744?mode=best&p=1
|
yumingyi/rl_course_vizdoom_health_gathering_supreme
|
yumingyi
| 2023-03-30T12:42:01Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T12:39:20Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.93 +/- 4.69
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r yumingyi/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Hgatsadrtasd/attempt2
|
Hgatsadrtasd
| 2023-03-30T12:23:02Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-30T09:10:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: attempt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# attempt2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
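Since the model was fine-tuned on SQuAD, it can be used with the `question-answering` pipeline; a minimal sketch:
```python
from transformers import pipeline

# Minimal sketch for extractive QA with this checkpoint.
qa = pipeline("question-answering", model="Hgatsadrtasd/attempt2")
print(qa(question="Where do penguins live?",
         context="Penguins are flightless birds that live almost exclusively in the Southern Hemisphere."))
```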
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Inzamam567/Useless_Dalcefo
|
Inzamam567
| 2023-03-30T12:20:57Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-03-30T12:20:57Z |
---
duplicated_from: AnaNoSleep/models_by_dalcefo
---
|
Horken/q_taxi
|
Horken
| 2023-03-30T12:16:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T12:15:14Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Horken/q_taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Saladchin/Reinforce-1
|
Saladchin
| 2023-03-30T12:14:20Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T12:14:06Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 452.50 +/- 142.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
saberzl/LunarLnder-v2
|
saberzl
| 2023-03-30T12:09:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T12:08:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.14 +/- 14.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
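A minimal loading sketch, assuming the checkpoint was pushed with the usual `huggingface_sb3` helpers (the zip filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# The checkpoint filename inside the repo is an assumption.
checkpoint = load_from_hub(repo_id="saberzl/LunarLnder-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```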
|
coreml-community/coreml-realisticVision-v20
|
coreml-community
| 2023-03-30T12:06:31Z | 0 | 21 | null |
[
"coreml",
"stable-diffusion",
"text-to-image",
"not-for-all-eyes",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-30T09:21:00Z |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
- not-for-all-eyes
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
# realisticVision-v20:
Source(s): [Hugging Face](https://huggingface.co/SG161222/Realistic_Vision_V2.0) - [CivitAI](https://civitai.com/models/4201/realistic-vision-v20)
**Please read this!**
My model has always been free and always will be free. There are no restrictions on the use of the model. The rights to this model still belong to me.
This model is available on Mage.Space, Sinkin.ai, GetImg.ai and RandomSeed.co (NSFW content)
You can find out news about this model and future models, as well as support me on Boosty.
Recommended for use with [VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse-original) which has already been baked into the converted `CoreML` model version here.
I use this template to get good generation results:
**Prompt**: RAW photo, subject, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
**Example**: RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
**Negative Prompt**: (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck
OR
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation
`Euler A` or `DPM++ 2M Karras` with 25 steps
`CFG Scale` 7
`Hires Fix` with `Latent` upscaler
0 `Hires Steps` and `Denoising Strength` 0.25 - 0.45
`Upscaling` by 1.1 - 2.0 <br><br>




|
vocabtrimmer/mt5-small-squad-qa-trimmed-en-5000
|
vocabtrimmer
| 2023-03-30T11:59:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T11:40:13Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qa](https://huggingface.co/lmqg/mt5-small-squad-qa): `vocabtrimmer/mt5-small-squad-qa-trimmed-en-5000`
This model is a trimmed version of [lmqg/mt5-small-squad-qa](https://huggingface.co/lmqg/mt5-small-squad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-squad-qa | vocabtrimmer/mt5-small-squad-qa-trimmed-en-5000 |
|:---------------------------|:--------------------------|:--------------------------------------------------|
| parameter_size_full | 300,165,504 | 49,184,128 |
| parameter_size_embedding | 256,103,424 | 5,122,048 |
| vocab_size | 250,101 | 5,002 |
| compression_rate_full | 100.0 | 16.39 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 5000 | 2 |
|
0-hero/flan-alpaca-ul2
|
0-hero
| 2023-03-30T11:59:23Z | 4 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:tatsu-lab/alpaca",
"arxiv:2210.11416",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T11:11:03Z |
---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
---
## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines
Thanks to [declare-lab](https://huggingface.co/declare-lab) for the training [repository](https://github.com/declare-lab/flan-alpaca), which contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
synthetic instruction tuning to existing instruction-tuned models such as [Flan-T5](https://arxiv.org/abs/2210.11416).
The pretrained models and demos are available on HuggingFace 🤗 :
| Model | Parameters | Training GPUs |
|---------------------------------------------------------------------------|------------|-----------------|
| [Flan-Alpaca-Base](https://huggingface.co/declare-lab/flan-alpaca-base) | 220M | 1x A6000 |
| [Flan-Alpaca-Large](https://huggingface.co/declare-lab/flan-alpaca-large) | 770M | 1x A6000 |
| [Flan-Alpaca-XL](https://huggingface.co/declare-lab/flan-alpaca-xl) | 3B | 1x A6000 |
| [Flan-Alpaca-XXL](https://huggingface.co/declare-lab/flan-alpaca-xxl) | 11B | 4x A6000 (FSDP) |
| [Flan-Alpaca-UL2](https://huggingface.co/0-hero/flan-alpaca-ul2) | 20B | 4x A100 (80G) (FSDP) |
### Why?
[Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) represents an exciting new direction
to approximate the performance of large language models (LLMs) like ChatGPT cheaply and easily.
Concretely, they leverage an LLM such as GPT-3 to generate instructions as synthetic training data.
The synthetic data which covers more than 50k tasks can then be used to finetune a smaller model.
However, the original implementation is less accessible due to licensing constraints of the
underlying [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) model.
Furthermore, users have noted [potential noise](https://github.com/tloen/alpaca-lora/issues/65) in the synthetic
dataset. Hence, it may be better to explore a fully accessible model that is already trained on high-quality (but
less diverse) instructions such as [Flan-T5](https://arxiv.org/abs/2210.11416).
### Usage
```
from transformers import pipeline
prompt = "Write an email about an alpaca that likes flan"
model = pipeline(model="0-hero/flan-alpaca-ul2")
model(prompt, max_length=128, do_sample=True)
```
Readme forked from declare-lab/flan-alpaca-xxl
|
EvaOr/DeepRL_chp5_MLAgents_SnowballTarget
|
EvaOr
| 2023-03-30T11:54:19Z | 29 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-30T11:53:34Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
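To resume locally you first need the trained checkpoint and configuration from the Hub. A hedged sketch using the `mlagents-load-from-hf` helper from the ML-Agents Hub integration (the local directory name is arbitrary):
```
mlagents-load-from-hf --repo-id="EvaOr/DeepRL_chp5_MLAgents_SnowballTarget" --local-dir="./downloads"
```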
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: EvaOr/DeepRL_chp5_MLAgents_SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ZhihongDeng/ppo-LunarLander-v2
|
ZhihongDeng
| 2023-03-30T11:45:11Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-02-16T10:03:34Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -124.48 +/- 47.90
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'ZhihongDeng/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
vocabtrimmer/mbart-large-cc25-itquad-qa-trimmed-it-30000
|
vocabtrimmer
| 2023-03-30T11:38:33Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T11:11:00Z |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-itquad-qa](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qa): `vocabtrimmer/mbart-large-cc25-itquad-qa-trimmed-it-30000`
This model is a trimmed version of [lmqg/mbart-large-cc25-itquad-qa](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-itquad-qa | vocabtrimmer/mbart-large-cc25-itquad-qa-trimmed-it-30000 |
|:---------------------------|:----------------------------------|:-----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 385,548,288 |
| parameter_size_embedding | 512,057,344 | 61,448,192 |
| vocab_size | 250,028 | 30,004 |
| compression_rate_full | 100.0 | 63.12 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 30000 | 2 |
|
Angel-IG/distilgpt2-finetuned-mecanicos
|
Angel-IG
| 2023-03-30T11:32:21Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-30T11:19:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-mecanicos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-mecanicos
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8441 | 1.0 | 873 | 1.6876 |
| 1.5373 | 2.0 | 1746 | 1.6241 |
| 1.5216 | 3.0 | 2619 | 1.6138 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
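The card does not show how to query the model. A minimal sketch with the `transformers` text-generation pipeline; the prompt is purely illustrative, since the fine-tuning data is not described:
```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned checkpoint and sample a short continuation.
generator = pipeline("text-generation", model="Angel-IG/distilgpt2-finetuned-mecanicos")
print(generator("El motor no arranca porque", max_length=50, do_sample=True)[0]["generated_text"])
```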
|
DrishtiSharma/Reinforce-PixelCopter-1L
|
DrishtiSharma
| 2023-03-30T11:24:25Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T11:10:13Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-1.5L
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 41.90 +/- 16.89
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
|
abush6352/sd-class-butterflies-32
|
abush6352
| 2023-03-30T11:12:23Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-03-30T11:12:06Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('abush6352/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
vocabtrimmer/mt5-small-squad-qg-trimmed-en-120000
|
vocabtrimmer
| 2023-03-30T11:09:01Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T10:50:41Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg): `vocabtrimmer/mt5-small-squad-qg-trimmed-en-120000`
This model is a trimmed version of [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-squad-qg | vocabtrimmer/mt5-small-squad-qg-trimmed-en-120000 |
|:---------------------------|:--------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 166,944,128 |
| parameter_size_embedding | 256,103,424 | 122,882,048 |
| vocab_size | 250,101 | 120,002 |
| compression_rate_full | 100.0 | 55.62 |
| compression_rate_embedding | 100.0 | 47.98 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 120000 | 2 |
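No usage snippet is included in this trimmed card either. A minimal sketch, assuming the parent lmqg/mt5-small-squad-qg highlight format in which `<hl>` tokens mark the answer span:
```python
from transformers import pipeline

# Hedged sketch: the <hl> highlight format is taken from the parent lmqg/mt5-small-squad-qg card.
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-squad-qg-trimmed-en-120000")
print(pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."))
```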
|
Ganu3010/Reinforce-Cartpole-v1
|
Ganu3010
| 2023-03-30T10:58:33Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T10:58:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cfalholt/A2C-PandaReachDense-v2
|
cfalholt
| 2023-03-30T10:51:34Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-09T12:17:48Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.65 +/- 0.28
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
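Until the author fills in the TODO above, here is a hedged loading sketch; the checkpoint filename `a2c-PandaReachDense-v2.zip` is an assumption about the repo layout, not something stated in the card:
```python
import gym
import panda_gym  # registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename below is a guess at the uploaded checkpoint name.
checkpoint = load_from_hub(repo_id="cfalholt/A2C-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(200):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```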
|
sindri101/medical_chat-en-zh
|
sindri101
| 2023-03-30T10:51:21Z | 119 | 9 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:zhaozh/autotrain-data-chatdoctor-reft-en-zh",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-03-30T10:45:31Z |
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- zhaozh/autotrain-data-chatdoctor-reft-en-zh
co2_eq_emissions:
emissions: 2.240193635056679
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 45173113346
- CO2 Emissions (in grams): 2.2402
## Validation Metrics
- Loss: 1.636
- SacreBLEU: 29.513
- Gen len: 176.613
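The card does not include an inference snippet. A minimal sketch with the `transformers` translation pipeline; the English-to-Chinese direction is inferred from the repository name, not stated in the card:
```python
from transformers import pipeline

# Hedged sketch: direction (en -> zh) is an assumption based on the repo name.
translator = pipeline("translation", model="sindri101/medical_chat-en-zh")
print(translator("I have had a persistent cough and a mild fever for three days.", max_length=256))
```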
|
koya1/videomae-base-finetuned-ucf101-subset
|
koya1
| 2023-03-30T10:50:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-02-24T05:10:29Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8124
- Accuracy: 0.8324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3990
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1012 | 0.1 | 398 | 1.9809 | 0.38 |
| 1.0416 | 1.1 | 796 | 1.6140 | 0.56 |
| 0.2096 | 2.1 | 1194 | 1.5776 | 0.66 |
| 0.7101 | 3.1 | 1592 | 1.2004 | 0.74 |
| 1.2344 | 4.1 | 1990 | 1.9621 | 0.58 |
| 0.1809 | 5.1 | 2388 | 1.6322 | 0.71 |
| 0.0011 | 6.1 | 2786 | 1.8266 | 0.71 |
| 0.0951 | 7.1 | 3184 | 1.5910 | 0.78 |
| 0.4047 | 8.1 | 3582 | 1.9999 | 0.7 |
| 0.0011 | 9.1 | 3980 | 1.5903 | 0.78 |
| 0.001 | 10.0 | 3990 | 1.5903 | 0.78 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
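The card omits inference code. A minimal sketch in which random frames stand in for a decoded clip; the 16-frame input length is the VideoMAE default and is an assumption here:
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# 16 random RGB frames stand in for a real video clip; replace with decoded frames.
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

processor = VideoMAEImageProcessor.from_pretrained("koya1/videomae-base-finetuned-ucf101-subset")
model = VideoMAEForVideoClassification.from_pretrained("koya1/videomae-base-finetuned-ucf101-subset")

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```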
|
yumingyi/lunarlander-v2-unit8-2
|
yumingyi
| 2023-03-30T10:38:48Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T10:38:04Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 62.22 +/- 38.23
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.0006,
 'num_envs': 64,
 'num_steps': 1024,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.98,
 'gae_lambda': 0.98,
 'num_minibatches': 64,
 'update_epochs': 64,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'yumingyi/lunarlander-v2-unit8-2',
 'batch_size': 65536,
 'minibatch_size': 1024}
```
|
vocabtrimmer/mt5-small-squad-qg-trimmed-en-60000
|
vocabtrimmer
| 2023-03-30T10:20:18Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T10:03:40Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg): `vocabtrimmer/mt5-small-squad-qg-trimmed-en-60000`
This model is a trimmed version of [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-squad-qg | vocabtrimmer/mt5-small-squad-qg-trimmed-en-60000 |
|:---------------------------|:--------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 105,504,128 |
| parameter_size_embedding | 256,103,424 | 61,442,048 |
| vocab_size | 250,101 | 60,002 |
| compression_rate_full | 100.0 | 35.15 |
| compression_rate_embedding | 100.0 | 23.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 60000 | 2 |
|
danstinga/my_awesome_wnut_model
|
danstinga
| 2023-03-30T10:16:39Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-30T09:32:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.56
- name: Recall
type: recall
value: 0.28544949026876737
- name: F1
type: f1
value: 0.378146101903008
- name: Accuracy
type: accuracy
value: 0.9407464409388226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2754
- Precision: 0.56
- Recall: 0.2854
- F1: 0.3781
- Accuracy: 0.9407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2826 | 0.5246 | 0.2475 | 0.3363 | 0.9384 |
| No log | 2.0 | 426 | 0.2754 | 0.56 | 0.2854 | 0.3781 | 0.9407 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
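For reference, a minimal sketch of running the fine-tuned checkpoint with the `transformers` token-classification pipeline (not part of the original card):
```python
from transformers import pipeline

# Minimal sketch: aggregate sub-word predictions into entity spans.
ner = pipeline("token-classification", model="danstinga/my_awesome_wnut_model", aggregation_strategy="simple")
print(ner("The Golden State Warriors are an American professional basketball team based in San Francisco."))
```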
|
zirui3/flan-t5-large-alpaca
|
zirui3
| 2023-03-30T09:45:45Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-03-30T09:33:47Z |
# Model summary
Flan-T5-large fine-tuned on the Alpaca dataset with LoRA.
# training
* torch==2.0.0+cu117
* transformers==4.28.0.dev0
* 8 x V100 32G
# How to use
```python
import transformers
from peft import PeftModel

# Load the tokenizer and base model, then attach the LoRA adapter from this repo.
tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-large")
base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
peft_model = PeftModel.from_pretrained(base_model, "zirui3/flan-t5-large-alpaca")

inputs = tokenizer("Any instruction that you like.", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_length=128, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
|
DoctorRobotnik/ppo-CartPole-v1
|
DoctorRobotnik
| 2023-03-30T09:44:51Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T09:44:41Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -127.66 +/- 43.37
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'DoctorRobotnik/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
vocabtrimmer/mt5-small-trimmed-en-60000-squad-qg
|
vocabtrimmer
| 2023-03-30T09:43:32Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T09:42:11Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-en-60000-squad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 22.2
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 49.3
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 24.16
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 90.05
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 62.89
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-en-60000-squad-qg`
This model is a fine-tuned version of [ckpts/mt5-small-trimmed-en-60000](https://huggingface.co/ckpts/mt5-small-trimmed-en-60000) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mt5-small-trimmed-en-60000](https://huggingface.co/ckpts/mt5-small-trimmed-en-60000)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="vocabtrimmer/mt5-small-trimmed-en-60000-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-en-60000-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-60000-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.05 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 54.46 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 38.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 28.68 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 22.2 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 24.16 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 62.89 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 49.3 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: ckpts/mt5-small-trimmed-en-60000
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 16
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-60000-squad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
manuelmaiorano/Reinforce-Cartpole
|
manuelmaiorano
| 2023-03-30T09:41:03Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T09:40:49Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 492.20 +/- 23.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vocabtrimmer/mt5-small-squad-qg-trimmed-en-15000
|
vocabtrimmer
| 2023-03-30T09:40:56Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T09:24:45Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg): `vocabtrimmer/mt5-small-squad-qg-trimmed-en-15000`
This model is a trimmed version of [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-squad-qg | vocabtrimmer/mt5-small-squad-qg-trimmed-en-15000 |
|:---------------------------|:--------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 59,424,128 |
| parameter_size_embedding | 256,103,424 | 15,362,048 |
| vocab_size | 250,101 | 15,002 |
| compression_rate_full | 100.0 | 19.8 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 15000 | 2 |
|
mrsteyk/rupgt-chatml-tokenizer
|
mrsteyk
| 2023-03-30T09:39:17Z | 0 | 0 |
transformers
|
[
"transformers",
"chatml",
"ru",
"endpoints_compatible",
"region:us"
] | null | 2023-03-30T09:36:57Z |
---
language:
- ru
library_name: transformers
tags:
- chatml
---
|
doluvor/donut-base-doluvor
|
doluvor
| 2023-03-30T09:38:29Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-03-29T10:10:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-doluvor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-doluvor
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
marimurta/a2c-AntBulletEnv-v0-m
|
marimurta
| 2023-03-30T09:34:45Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T09:33:13Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1127.86 +/- 338.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
andyleeyuan/RacyTest
|
andyleeyuan
| 2023-03-30T09:34:32Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-20T03:24:48Z |
---
license: creativeml-openrail-m
---
|
heziyevv/pyramids
|
heziyevv
| 2023-03-30T09:31:02Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-30T09:30:18Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: heziyevv/pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Shivraj8615/ppo-Pyramids
|
Shivraj8615
| 2023-03-30T09:29:51Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-30T09:29:45Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: Shivraj8615/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vocabtrimmer/mt5-small-squad-qg-trimmed-en-10000
|
vocabtrimmer
| 2023-03-30T09:23:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T09:06:48Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg): `vocabtrimmer/mt5-small-squad-qg-trimmed-en-10000`
This model is a trimmed version of [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-squad-qg | vocabtrimmer/mt5-small-squad-qg-trimmed-en-10000 |
|:---------------------------|:--------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 54,304,128 |
| parameter_size_embedding | 256,103,424 | 10,242,048 |
| vocab_size | 250,101 | 10,002 |
| compression_rate_full | 100.0 | 18.09 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 10000 | 2 |
|
vorstcavry/ddosmixx
|
vorstcavry
| 2023-03-30T09:17:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-30T09:17:23Z |
---
license: creativeml-openrail-m
---
|
AryaParikh/autotrain-text_summary_arp-45146113306
|
AryaParikh
| 2023-03-30T09:07:17Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:Hinataaa/autotrain-data-text_summary_arp",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-03-30T08:57:50Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Hinataaa/autotrain-data-text_summary_arp
co2_eq_emissions:
emissions: 3.673615303025701
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 45146113306
- CO2 Emissions (in grams): 3.6736
## Validation Metrics
- Loss: 1.492
- Rouge1: 49.267
- Rouge2: 26.900
- RougeL: 46.736
- RougeLsum: 46.679
- Gen Len: 18.636
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Hinataaa/autotrain-text_summary_arp-45146113306
```
|
vocabtrimmer/mt5-small-squad-qg-trimmed-en-5000
|
vocabtrimmer
| 2023-03-30T09:05:36Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T08:46:49Z |
# Vocabulary Trimmed [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg): `vocabtrimmer/mt5-small-squad-qg-trimmed-en-5000`
This model is a trimmed version of [lmqg/mt5-small-squad-qg](https://huggingface.co/lmqg/mt5-small-squad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-squad-qg | vocabtrimmer/mt5-small-squad-qg-trimmed-en-5000 |
|:---------------------------|:--------------------------|:--------------------------------------------------|
| parameter_size_full | 300,165,504 | 49,184,128 |
| parameter_size_embedding | 256,103,424 | 5,122,048 |
| vocab_size | 250,101 | 5,002 |
| compression_rate_full | 100.0 | 16.39 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 5000 | 2 |
|
andyleeyuan/RacyMixV1
|
andyleeyuan
| 2023-03-30T09:03:57Z | 0 | 13 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-04T14:26:56Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# RacyMixV1
Merged by weighted sum: <strong>*PastelMix 0.6*</strong> + <strong>*RacyV1 0.4*</strong> (I forgot the exact recipe).
VAE: <strong>kl-f8-anime2</strong> is recommended (https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt).
Negative prompt: <strong>EasyNegative</strong> (https://huggingface.co/datasets/gsdf/EasyNegative).
Hand generation may be slightly unstable; adjust the negative prompt yourself.
If no specific background is specified, there is a high probability of generating a city or a supermarket.
# Examples
```
1girl, sarong bikini nail polish skindentation,cowboy shot, beach, sunlight, blue sky,
Negative prompt: EasyNegative,
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3550464031, Size: 512x768, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 1.5, Hires upscaler: Latent (nearest-exact)
```
<img src="https://i.imgur.com/DeavyG1.png" width="512" height="768">
<br>
```
((perfect details, highres, ultra-detailed, illustration)),
Hindu mythology, Chandra, deity, male, serene expression, crescent moon on forehead, white complexion, four arms, holding conch shell and discus, lotus flower, cosmic background, stars, peaceful
Negative prompt: EasyNegative,
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3352669632, Size: 512x768, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 1.5, Hires upscaler: Latent (nearest-exact)
```
<img src="https://i.imgur.com/PFzyRrp.png" width="512" height="768">
<br>
```
profile,charter Layout,full body,stand at attention,look at viewer,put down hands,fox girl,fancy clothes,detail clothes,white background,
Negative prompt: (low quality, worst quality:1.4),(EasyNegative:1.4),(3 legs:1.3),(NG_DeepNegative_V1_75T:1.3), (painting by bad-artist:1.3), (negprompt5:1.2), (bad-image-v2-39000:1.3),
lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 25, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 1954153806, Size: 512x768, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://i.imgur.com/Jdc2VQY.png" width="512" height="768">
<br>
|
heziyevv/ppo-SnowballTarget
|
heziyevv
| 2023-03-30T08:57:42Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-30T08:57:35Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: heziyevv/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cper/chat
|
cper
| 2023-03-30T08:54:23Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-03-30T08:54:23Z |
---
license: cc-by-nc-sa-4.0
---
|
research-backup/mbart-large-cc25-itquad-qa
|
research-backup
| 2023-03-30T08:47:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"it",
"dataset:lmqg/qg_itquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T08:41:16Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: it
datasets:
- lmqg/qg_itquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: Quale batterio ha il nome del paese che colpisce di più nel suo nome?, context: Il complesso M. tubercolosi (MTBC) comprende altri quattro micobatteri causa di tubercolosi: M. bovis, M. africanum, M. canetti e M. microti. M. africanum non è molto diffuso, ma è una causa significativa di tubercolosi in alcune parti dell' Africa. M. bovis era una volta una causa comune della tubercolosi, ma l' introduzione del latte pastorizzato ha quasi completamente eliminato questo problema di salute pubblica nei paesi sviluppati. M. canetti è raro e sembra essere limitato al Corno d' Africa, anche se alcuni casi sono stati osservati negli emigranti africani. M. microti è anche raro ed è visto quasi solo in persone immunodeficienti, anche se la sua prevalenza può essere significativamente sottovalutata."
example_title: "Question Answering Example 1"
model-index:
- name: lmqg/mbart-large-cc25-itquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_itquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 19.64
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 37.59
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 33.6
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 93.12
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 80.49
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 64.73
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 50.01
---
# Model Card of `lmqg/mbart-large-cc25-itquad-qa`
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question answering task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- **Language:** it
- **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="it", model="lmqg/mbart-large-cc25-itquad-qa")
# model prediction
answers = model.answer_q(list_question="Quale batterio ha il nome del paese che colpisce di più nel suo nome?", list_context=" Il complesso M. tubercolosi (MTBC) comprende altri quattro micobatteri causa di tubercolosi: M. bovis, M. africanum, M. canetti e M. microti. M. africanum non è molto diffuso, ma è una causa significativa di tubercolosi in alcune parti dell' Africa. M. bovis era una volta una causa comune della tubercolosi, ma l' introduzione del latte pastorizzato ha quasi completamente eliminato questo problema di salute pubblica nei paesi sviluppati. M. canetti è raro e sembra essere limitato al Corno d' Africa, anche se alcuni casi sono stati osservati negli emigranti africani. M. microti è anche raro ed è visto quasi solo in persone immunodeficienti, anche se la sua prevalenza può essere significativamente sottovalutata.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-itquad-qa")
output = pipe("question: Quale batterio ha il nome del paese che colpisce di più nel suo nome?, context: Il complesso M. tubercolosi (MTBC) comprende altri quattro micobatteri causa di tubercolosi: M. bovis, M. africanum, M. canetti e M. microti. M. africanum non è molto diffuso, ma è una causa significativa di tubercolosi in alcune parti dell' Africa. M. bovis era una volta una causa comune della tubercolosi, ma l' introduzione del latte pastorizzato ha quasi completamente eliminato questo problema di salute pubblica nei paesi sviluppati. M. canetti è raro e sembra essere limitato al Corno d' Africa, anche se alcuni casi sono stati osservati negli emigranti africani. M. microti è anche raro ed è visto quasi solo in persone immunodeficienti, anche se la sua prevalenza può essere significativamente sottovalutata.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_itquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 50.01 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| AnswerF1Score | 64.73 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| BERTScore | 93.12 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_1 | 32.51 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_2 | 26.71 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_3 | 22.92 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_4 | 19.64 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| METEOR | 33.6 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| MoverScore | 80.49 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| ROUGE_L | 37.59 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_itquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 4
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-itquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
FBM/rl_course_vizdoom_health_gathering_supreme
|
FBM
| 2023-03-30T08:40:29Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-30T08:40:13Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 15.02 +/- 5.35
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r FBM/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
lmqg/mt5-small-squad-qa
|
lmqg
| 2023-03-30T08:40:01Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question answering",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T08:31:25Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things."
example_title: "Question Answering Example 1"
- text: "question: who created the post as we know it today?, context: 'So much of The Post is Ben,' Mrs. Graham said in 1994, three years after Bradlee retired as editor. 'He created it as we know it today.'— Ed O'Keefe (@edatpost) October 21, 2014"
example_title: "Question Answering Example 2"
model-index:
- name: lmqg/mt5-small-squad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 38.98
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 68.71
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 39.9
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 92.09
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 82.04
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 70.14
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 55.51
---
# Model Card of `lmqg/mt5-small-squad-qa`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question answering task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/mt5-small-squad-qa")
# model prediction
answers = model.answer_q(list_question="What is a person called is practicing heresy?", list_context=" Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
- With `transformers`
```python
from transformers import pipeline
# load the fine-tuned model as a text2text-generation pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-squad-qa")
# the input concatenates the question and its paragraph as "question: ..., context: ..."
output = pipe("question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
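The `text2text-generation` pipeline returns a list of dictionaries, so the predicted answer string is available as `output[0]["generated_text"]`.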
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-squad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 55.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 70.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 92.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 54.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 48.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 43.42 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 38.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 39.9 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 82.04 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 68.71 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
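AnswerExactMatch and AnswerF1Score for extractive QA are commonly computed SQuAD-style, by normalizing the predicted and gold answer strings and then comparing them exactly (EM) or at the token level (F1). The sketch below shows that general recipe; it is not the exact `lmqg` implementation.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("a heretic", "heretic"))   # 1.0 after normalization
print(f1_score("a heretic is", "heretic"))   # ~0.67 from partial token overlap
```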
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 11
- batch: 16
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-squad-qa/raw/main/trainer_config.json).
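As an illustration only (the model was actually trained through the `lmqg` toolkit rather than with this script), the hyperparameters above map roughly onto a standard `transformers` sequence-to-sequence setup as sketched below. The `output_dir` value and the `paragraph_question`/`answer` field names are assumptions based on the input/output types listed above.

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# Values taken from the hyperparameter list above; everything else is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-squad-qa",   # placeholder output path
    num_train_epochs=11,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,     # effective batch size of 64
    learning_rate=5e-4,
    label_smoothing_factor=0.15,
    fp16=False,
    seed=1,
)

# Inputs ("paragraph_question") are truncated to 512 tokens, targets ("answer") to 32.
def tokenize(example):
    model_inputs = tokenizer(example["paragraph_question"], max_length=512, truncation=True)
    labels = tokenizer(text_target=example["answer"], max_length=32, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```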
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
trainMeWell/seg_distil_manyCorpus_undersampled-3-1_5sentsContext_3epoch_class-weight | trainMeWell | 2023-03-30T08:39:33Z | 179 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-03-30T07:28:16Z |
The validation set consists of GBH / NPR / DemocracyNow! material.

Confusion matrix on the validation set:

[[ 1284   692]
 [ 7539 28555]]

Per-class scores: 0.6497975708502024 and 0.791128719454757, which correspond to the row-wise (per-class) recall values implied by the confusion matrix.
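A small sketch of that recall arithmetic, assuming rows of the matrix are true labels and columns are predictions:

```python
import numpy as np

# Confusion matrix reported above (rows = true labels, columns = predictions, assumed).
cm = np.array([[1284, 692],
               [7539, 28555]])

# Per-class recall: diagonal entries divided by their row sums.
per_class_recall = cm.diagonal() / cm.sum(axis=1)
print(per_class_recall)  # [0.64979757 0.79112872]
```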
|