modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 06:29:04) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 530 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 06:28:51) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
LarryAIDraw/kissShotAcerolaOrionHeartUnder_1
|
LarryAIDraw
| 2023-03-11T15:53:40Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-11T15:51:52Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/18255/kiss-shot-acerola-orion-heart-under-blade-oshino-shinobu-adult-monogatari-series-lora
|
Kentris/doom_health_gathering_supreme
|
Kentris
| 2023-03-11T15:48:25Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"deep-reinforcement-learning",
"reinforcement-learning",
"doom_health_gathering_supreme",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T15:20:21Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
- doom_health_gathering_supreme
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.56 +/- 5.02
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r atorre/SampleFactory-ppo-doom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=SampleFactory-ppo-doom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=SampleFactory-ppo-doom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
faalbane/kopper-kreations-custom-stable-diffusion-style-v-1-5-21-photos-vv-1
|
faalbane
| 2023-03-11T15:36:27Z | 37 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-11T15:35:29Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: kk
---
### Kopper Kreations Custom Stable Diffusion (style) (v. 1.5) (21 photos) vv.1 Dreambooth model trained by faalbane with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
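A minimal local sketch of the same workflow with `diffusers` (the prompt wording is illustrative; `kk` is the concept token listed below):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-trained pipeline from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "faalbane/kopper-kreations-custom-stable-diffusion-style-v-1-5-21-photos-vv-1",
    torch_dtype=torch.float16,
).to("cuda")

# Include the concept token "kk" in your prompt
image = pipe("a copper vase in kk style, studio photo").images[0]
image.save("kk-sample.png")
```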
Sample pictures of:
kk (use that in your prompt)

|
maayansharon/climate_text_classification_mini_model
|
maayansharon
| 2023-03-11T15:28:52Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-09T23:39:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
model-index:
- name: climate_text_classification_mini_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# climate_text_classification_mini_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [climate-tagging-labelled-datasets](https://huggingface.co/datasets/maayansharon/climate-tagging-labelled-datasets) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7516
- Precision: 0.7941
- Recall: 0.9643
- Accuracy: 0.8
- F1: 0.8710
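A minimal usage sketch with the `transformers` pipeline (the input sentence is illustrative; the label names come from the model's own config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="maayansharon/climate_text_classification_mini_model",
)

# Returns the predicted label and confidence score for each input text
print(classifier("Rising sea levels are threatening coastal infrastructure."))
```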
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:------:|
| No log        | 1.0   | 10   | 0.5469          | 0.875     | 0.75   | 0.75     | 0.8077 |
| No log        | 2.0   | 20   | 0.7516          | 0.7941    | 0.9643 | 0.8      | 0.8710 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.7.0a0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ai-guru/lakhclean_mmmtrack_4bars_d-2048
|
ai-guru
| 2023-03-11T15:21:39Z | 193 | 35 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"music-modeling",
"music-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-17T17:24:51Z |
---
tags:
- gpt2
- text-generation
- music-modeling
- music-generation
widget:
- text: PIECE_START
- text: PIECE_START PIECE_START TRACK_START INST=34 DENSITY=8
- text: PIECE_START TRACK_START INST=1
---
# GPT-2 for Music
Language Models such as GPT-2 can be used for Music Generation. The idea is to represent pieces of music as texts, effectively reducing the task to Language Generation.
This model is a rather small instance of GPT-2 trained on the [Lakhclean dataset](https://colinraffel.com/projects/lmd/). The model generates 4 bars at a time at a 16th-note resolution with 4/4 meter.
If you want to contribute, if you want to say hello, if you want to know more, find me here:
- https://www.linkedin.com/in/dr-tristan-behrens-734967a2/
- https://www.youtube.com/@drtristanbehrens
- https://twitter.com/DrTBehrens
- https://github.com/AI-Guru
- https://huggingface.co/TristanBehrens
- https://huggingface.co/ai-guru
Run the model on Google Colab: https://colab.research.google.com/drive/1Mz-KJ8vX4Wylr4mzvgP-MclDwQJ06KSq?usp=sharing
## License
You are free to use this model in any open-source context without charge. If you do so, please credit me.
However, if you wish to use the model for commercial purposes, please contact me to discuss licensing terms. Depending on the specific use case, there may be fees associated with commercial use. I am open to negotiating the terms of the license to meet your needs and ensure that the model is used appropriately. Please feel free to reach out to me at your earliest convenience to discuss further.
## Model description
The model is GPT-2 with 6 decoder layers and 8 attention heads each. The context length is 2048, and the embedding dimension is 512.
## Model family
This model is part of a huge group of Transformers I have trained. Most of them are not publicly available.
If you are interested in using and/or licensing one of the models, please get in touch.
### Lakhclean
These models were trained on roundabout 15K MIDI files (the same as the model you are viewing now) from the Lakhclean dataset.
- lakhclean_mmmbar_4bars_d-2048: 4 bars resolution, bar inpainting, note density conditioning
- lakhclean_mmmbar_8bars_d-2048: 8 bars resolution, bar inpainting, note density conditioning
- lakhclean_mmmtrack_4bars_chords: 4 bars resolution, chord conditioning
- lakhclean_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning (this model)
- lakhclean_mmmtrack_4bars_simple-2048: 4 bars resolution
- lakhclean_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning
### Lakhfull
These models were trained on roundabout 175K MIDI files from the Lakh dataset.
- lakhfull_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning (the big brother of this model)
- lakhfull_mmmtrack_4bars_simple-2048: 4 bars resolution
### Metal
These models were trained on roundabout 7K MIDI files from my own collections. They contain genre conditioning.
- metal_mmmbar_4bars_d-2048: 4 bars resolution, bar inpainting, note density conditioning
- metal_mmmbar_8bars_d-2048: 8 bars resolution, bar inpainting, note density conditioning
- metal_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning
- metal_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning
### MetaMIDI Dataset genres
These models were trained on genre-specific subsets of the MetaMIDI dataset.
- mmd-baroque_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning
- mmd-baroque_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning
- mmd-classical_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning
- mmd-noncontemporary_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning
- mmd-pop_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning
- mmd-renaissance_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning
### MetaMIDI Dataset full
These models were trained on roundabout 400K MIDI files from the MetaMIDI dataset.
- mmd-full_mmmtrack_4bars_d-2048: 4 bars resolution, note density conditioning
- mmd-full_mmmtrack_8bars_d-2048: 8 bars resolution, note density conditioning
- mmd-full_mmmtrack_4bars_chords-d-2048: 4 bars resolution, note density conditioning, chord conditioning (most powerful model in the entire group)
## Intended uses & limitations
This model is just a proof of concept. It shows that HuggingFace can be used to compose music.
### How to use
There is a notebook in the repo that you can use to generate symbolic music and then render it.
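As a rough sketch of what that notebook does, the model can also be driven directly through the `transformers` generation API; the prompt follows the widget examples above, and the sampling settings are illustrative:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ai-guru/lakhclean_mmmtrack_4bars_d-2048"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Start a new piece; the model continues with track, bar and note tokens
inputs = tokenizer("PIECE_START", return_tensors="pt")
output = model.generate(**inputs, max_length=256, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0]))
```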
### Limitations and bias
Since this model has been trained on a very small corpus of music, it is overfitting heavily.
### Acknowledgements
This model has been created with support from NVIDIA. I am very grateful for the GPU compute they provided!
|
nsecord/Reinforce-Pixelcopter-PLE-v0-2
|
nsecord
| 2023-03-11T15:07:07Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T15:06:40Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.93 +/- 36.84
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Squirz/BlueFish
|
Squirz
| 2023-03-11T15:06:27Z | 0 | 2 | null |
[
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-01-20T17:34:37Z |
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
---
Just a bunch of merged models. I don't actually remember the formulas. The majority are anime related.
BlueFish V1.5 is the one I dislike the most, maybe because of the prompting. It is supposed to have a more semi-realistic look, but for me it's too plastic.
Added comparisons between V1, V1.5 and V2.
Potential other models that I used to merge those:
https://huggingface.co/Joeythemonster/anything-midjourney-v-4-1
https://huggingface.co/AdamOswald1/Anything-Preservation
https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release
https://huggingface.co/a1079602570/animefull-final-pruned
https://huggingface.co/hesw23168/SD-Elysium-Model 2nd ver
https://huggingface.co/OrangeMix/AbyssOrangeMix2
https://huggingface.co/WarriorMama777/OrangeMixs


|
maryyy1881/dormitory_girlish_tree_book_fun
|
maryyy1881
| 2023-03-11T15:04:31Z | 0 | 0 |
flair
|
[
"flair",
"art",
"music",
"finance",
"license:other",
"region:us"
] | null | 2023-03-11T14:56:03Z |
---
license: other
metrics:
- bleurt
library_name: flair
tags:
- art
- music
- finance
---
|
matthh/poca-SoccerTwos
|
matthh
| 2023-03-11T14:39:56Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-11T14:39:50Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: matthh/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
BobMcDear/vit_large_clip_patch14_clip_laion2b_ft_in1k_224
|
BobMcDear
| 2023-03-11T14:26:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-11T14:16:37Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_large_clip_patch14_clip_openai_ft_in1k_224
|
BobMcDear
| 2023-03-11T14:25:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-11T14:16:36Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_huge_clip_patch14_clip_laion2b_ft_in1k_224
|
BobMcDear
| 2023-03-11T14:25:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-11T14:16:36Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_base_clip_patch16_clip_laion2b_ft_in1k_384
|
BobMcDear
| 2023-03-11T14:25:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-11T14:16:35Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_base_clip_patch16_clip_laion2b_ft_in1k_224
|
BobMcDear
| 2023-03-11T14:25:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-11T14:16:34Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/vit_base_clip_patch32_clip_openai_ft_in1k_224
|
BobMcDear
| 2023-03-11T14:25:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-11T14:16:34Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
OpenAssistant/oasst-sft-1-pythia-12b
|
OpenAssistant
| 2023-03-11T14:25:14Z | 11,919 | 278 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-09T16:47:26Z |
---
license: apache-2.0
language:
- en
tags:
- sft
pipeline_tag: text-generation
widget:
- text: <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>
---
# Open-Assistant SFT-1 12B Model
This is the first iteration English supervised-fine-tuning (SFT) model of
the [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) project.
It is based on a Pythia 12B that was fine-tuned on ~22k human demonstrations
of assistant conversations collected through the
[https://open-assistant.io/](https://open-assistant.io/) human feedback web
app before March 7, 2023.
## Model Details
- **Developed by:** [Open-Assistant Contributors](https://open-assistant.io/)
- **Model type:** Transformer-based Language Model
- **Language:** English
- **Finetuned from:** [EleutherAI / pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
- **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-03-09_andreaskoepf_oasst-1_12b_7000_sampling_noprefix_lottery.json) ([sampling code](https://github.com/Open-Assistant/oasst-model-eval/blob/3d71f3be100c05cd8ddb568365e036a29fbff8c7/model_eval/manual/sampling_report.py)).
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
Example of generated continuation (typical sampling 0.2):
```
A visual joke often accompanied by text that spreads rapidly from individual
to individual. The exact origin of the term "meme" is unclear, but it was
first used in the 1990s by Richard Dawkins in his book "The Selfish Gene."
He defined a meme as an idea, behavior, or style that is transmitted from
one person to another within a culture.
Dawkins' use of the term caught on among scholars and journalists, who began
to apply the concept to other types of cultural items such as advertisements,
fashion trends, and jokes. Over time, the term "meme" has come to be used
more generally to describe any social behavior that can be repeated and
altered by individuals. Today, the term is widely recognized and applied in
fields ranging from anthropology to psychology to marketing.<|endoftext|>
```
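A minimal generation sketch using this prompt format (sampling settings are illustrative; `device_map="auto"` assumes `accelerate` is installed and that enough memory is available for a 12B model):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "OpenAssistant/oasst-sft-1-pythia-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Wrap the user message in the special tokens described above
prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0]))
```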
## Limitations
See limitations of Pythia 12B base model [here](https://huggingface.co/EleutherAI/pythia-12b-deduped#limitations-and-biases).
The model is known to fail horribly at answering math and coding questions.
Beware of hallucinations: Outputs are often factually wrong or misleading.
Replies might look convincing (at first glance) while containing completely
made up false statements.
This model is usable only for English conversations.
|
BobMcDear/vit_base_clip_patch16_clip_openai_ft_in1k_224
|
BobMcDear
| 2023-03-11T14:25:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-11T14:16:32Z |
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
EJaalborg2022/mt5-small-finetuned-beer-ctg-en
|
EJaalborg2022
| 2023-03-11T14:17:37Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:beer_reviews_label_drift_neg",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-11T08:51:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beer_reviews_label_drift_neg
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-beer-ctg-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: beer_reviews_label_drift_neg
type: beer_reviews_label_drift_neg
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 26.1727
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-beer-ctg-en
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the beer_reviews_label_drift_neg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5562
- Rouge1: 26.1727
- Rouge2: 15.5176
- Rougel: 26.1813
- Rougelsum: 26.5943
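A minimal inference sketch with the `transformers` text2text pipeline; the input string is purely illustrative, since the exact prompt format the model expects is not documented in this card:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="EJaalborg2022/mt5-small-finetuned-beer-ctg-en",
)

# Illustrative input only; check the training data for the real expected format
print(generator("negative review: a hoppy IPA with a harsh, bitter finish", max_length=64))
```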
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 5.6484 | 1.0 | 115 | 3.7051 | 14.9617 | 2.9719 | 14.9901 | 15.104 |
| 3.7622 | 2.0 | 230 | 3.2168 | 21.9807 | 0.0 | 21.8357 | 21.8277 |
| 3.4237 | 3.0 | 345 | 2.8767 | 8.9642 | 5.2358 | 8.9597 | 9.0307 |
| 2.8204 | 4.0 | 460 | 2.4836 | 21.9807 | 0.0 | 21.8357 | 21.8277 |
| 2.3335 | 5.0 | 575 | 2.2340 | 8.9642 | 5.2358 | 8.9597 | 9.0307 |
| 2.1118 | 6.0 | 690 | 1.9121 | 8.9642 | 5.2358 | 8.9597 | 9.0307 |
| 1.7533 | 7.0 | 805 | 1.6571 | 25.9752 | 13.5161 | 25.8609 | 26.2153 |
| 1.5755 | 8.0 | 920 | 1.5562 | 26.1727 | 15.5176 | 26.1813 | 26.5943 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
mazkooleg/0-9up-hubert-base-ls960-ft
|
mazkooleg
| 2023-03-11T14:04:39Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:mazkooleg/0-9up_google_speech_commands_augmented_raw",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-03-11T13:35:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hubert-base-ls960-ft
results: []
datasets:
- mazkooleg/0-9up_google_speech_commands_augmented_raw
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ls960-ft
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the [mazkooleg/0-9up_google_speech_commands_augmented_raw](https://huggingface.co/datasets/mazkooleg/0-9up_google_speech_commands_augmented_raw) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0123
- Accuracy: 0.9973
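A minimal usage sketch with the `transformers` audio-classification pipeline (the file path is a placeholder; per the dataset name above, the model expects short spoken-keyword clips sampled at 16 kHz):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="mazkooleg/0-9up-hubert-base-ls960-ft",
)

# "recording.wav" is a placeholder for a short 16 kHz mono clip
print(classifier("recording.wav"))
```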
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.1205 | 1.0 | 8558 | 0.9955 | 0.0173 |
| 0.0638 | 2.0 | 17116 | 0.9973 | 0.0123 |
| 0.0747 | 3.0 | 25674 | 0.9964 | 0.0183 |
| 0.0636 | 4.0 | 34232 | 0.9958 | 0.0201 |
| 0.0531 | 5.0 | 42790 | 0.9967 | 0.0168 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cpu
- Datasets 2.10.1
- Tokenizers 0.12.1
|
Feldi/PixelCopter-reinforce-v2
|
Feldi
| 2023-03-11T13:44:17Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T13:44:14Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter-reinforce-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.00 +/- 30.06
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Abdullah007/autoencoder-keras-mnist-demo
|
Abdullah007
| 2023-03-11T13:37:26Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-03-11T13:35:56Z |
---
library_name: keras
pipeline_tag: image-classification
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
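A minimal loading sketch, assuming the repository was pushed with the `huggingface_hub` Keras integration (which the `keras` library tag suggests):
```python
from huggingface_hub import from_pretrained_keras

# Reload the Keras model that was pushed to the Hub and inspect its layers
model = from_pretrained_keras("Abdullah007/autoencoder-keras-mnist-demo")
model.summary()
```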
|
Feldi/Reinforce-PoleBalancing
|
Feldi
| 2023-03-11T12:41:43Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T12:41:38Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PoleBalancing
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 254.00 +/- 7.28
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
huggingtweets/jacksepticeye
|
huggingtweets
| 2023-03-11T12:27:49Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-11T12:26:35Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jacksepticeye/1678537664774/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506613842633232400/ILPlMQ63_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jacksepticeye</div>
<div style="text-align: center; font-size: 14px;">@jacksepticeye</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jacksepticeye.
| Data | Jacksepticeye |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 107 |
| Short tweets | 646 |
| Tweets kept | 2495 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/10bvtkin/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jacksepticeye's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/65s277k4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/65s277k4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jacksepticeye')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
helenai/distilbert-base-uncased-distilled-squad-ov-fp32
|
helenai
| 2023-03-11T12:26:36Z | 293 | 0 |
transformers
|
[
"transformers",
"openvino",
"distilbert",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-11T12:26:11Z |
---
language:
- en
tags:
- openvino
---
# distilbert-base-uncased-distilled-squad
This is the [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) model converted to [OpenVINO](https://openvino.ai) for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/distilbert-base-uncased-distilled-squad-ov-fp32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForQuestionAnswering.from_pretrained(model_id)
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = pipe("What is OpenVINO?", "OpenVINO is a framework that accelerates deep learning inferencing")
print(result)
```
|
Taratata/rl_course_vizdoom_health_gathering_supreme
|
Taratata
| 2023-03-11T11:51:20Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T11:46:42Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.48 +/- 4.69
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Taratata/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
bibekbehera/autotrain-numeric_prediction-40376105019
|
bibekbehera
| 2023-03-11T11:40:14Z | 3 | 0 |
transformers
|
[
"transformers",
"joblib",
"xgboost",
"autotrain",
"tabular",
"regression",
"tabular-regression",
"dataset:bibekbehera/autotrain-data-numeric_prediction",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
tabular-regression
| 2023-03-11T10:41:31Z |
---
tags:
- autotrain
- tabular
- regression
- tabular-regression
datasets:
- bibekbehera/autotrain-data-numeric_prediction
co2_eq_emissions:
emissions: 0.09875665677327088
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 40376105019
- CO2 Emissions (in grams): 0.0988
## Validation Metrics
- Loss: 0.152
- R2: 0.659
- MSE: 0.023
- MAE: 0.062
- RMSLE: 0.105
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # path to the CSV file you want to score
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
```
|
projjal/en-fr-model-t5-small
|
projjal
| 2023-03-11T11:14:11Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-11T10:45:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: projjal/en-fr-model-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# projjal/en-fr-model-t5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0428
- Train Accuracy: 0.7179
- Epoch: 0
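A minimal inference sketch (the model card lists TensorFlow weights; the T5-style translation prefix is an assumption about the expected input format):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "projjal/en-fr-model-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumption: inputs use the standard T5 prefix for English-to-French translation
inputs = tokenizer("translate English to French: The weather is nice today.", return_tensors="tf")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```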
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.002, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.0428 | 0.7179 | 0 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
haddadalwi/bert-large-uncased-whole-word-masking-squad2-finetuned-squad2-islamic
|
haddadalwi
| 2023-03-11T11:13:20Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-08T16:02:57Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-large-uncased-whole-word-masking-squad2-finetuned-squad2-islamic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-finetuned-squad2-islamic
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6799
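A minimal usage sketch with the `transformers` question-answering pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="haddadalwi/bert-large-uncased-whole-word-masking-squad2-finetuned-squad2-islamic",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the squad_v2 dataset for extractive question answering.",
)
print(result)
```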
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6646 | 1.0 | 1000 | 0.6799 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
hasarinduperera/poca-SoccerTwos
|
hasarinduperera
| 2023-03-11T11:06:52Z | 40 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-11T11:06:41Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: hasarinduperera/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sherrytan/ppo-LunarLander-v2
|
sherrytan
| 2023-03-11T10:30:05Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T10:29:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.76 +/- 19.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
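A hedged sketch of what that code could look like with `huggingface_sb3`; the checkpoint filename below is an assumption and may not match the actual file in this repository:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; check the repository's file list for the real .zip name
checkpoint = load_from_hub(
    repo_id="sherrytan/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
print(model.policy)
```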
|
LarryAIDraw/asakuraToruTHEIDOLM_v10
|
LarryAIDraw
| 2023-03-11T10:00:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-11T09:52:55Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/18125/asakura-toru-the-idolmster
|
projjal/de-en-model
|
projjal
| 2023-03-11T09:49:21Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-11T08:35:08Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: projjal/de-en-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# projjal/de-en-model
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3144
- Train Accuracy: 0.3364
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.002, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.7760 | 0.2387 | 0 |
| 1.1744 | 0.2738 | 1 |
| 0.9260 | 0.2887 | 2 |
| 0.7539 | 0.2996 | 3 |
| 0.6160 | 0.3094 | 4 |
| 0.5233 | 0.3157 | 5 |
| 0.4458 | 0.3233 | 6 |
| 0.3891 | 0.3300 | 7 |
| 0.3480 | 0.3327 | 8 |
| 0.3144 | 0.3364 | 9 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
Alice10/psvision
|
Alice10
| 2023-03-11T09:24:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-10T02:10:14Z |
```python
from psvis.models.backbones import resnet
resnet50 = resnet.__dict__['resnet50'](pretrained=True)
```
|
mizanu-zelalem/aiornot-Resnet50
|
mizanu-zelalem
| 2023-03-11T09:03:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-11T08:38:46Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model was built for a coding challenge for the Fatima Fellowship 2023 application. It classifies an image as AI-generated or real.
## Model Details
The model was built using a pretrained ResNet50. The skeleton of the model is:

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.models import Model

# Frozen ResNet50 backbone with a small sigmoid classification head
base = ResNet50(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
for layer in base.layers:
    layer.trainable = False

output = base.output
output = Flatten()(output)
output = Dense(256, activation='relu')(output)
output = Dense(128, activation='relu')(output)
output = Dropout(rate=0.3)(output)
output = Dense(1, activation='sigmoid')(output)
model = Model(base.input, output)
```
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Mizanu Zelalem]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:**
- The Colab notebook of the project can be found here: https://colab.research.google.com/drive/1-50OF83EwzezdGAV0Tna_b5gvC7qjSi3?usp=sharing
- model.h5 can be accessed from here: https://drive.google.com/file/d/10aqeUV6RkhnSRuc1OLKl0uOMXdo9DouQ/view?usp=sharing
|
JerryChou/contentvec
|
JerryChou
| 2023-03-11T08:55:06Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2023-03-11T08:38:59Z |
---
license: mit
---
# Strongly condemning certain open-source maintainers (e.g. [innnky](https://github.com/innnky)) for making a mess upstream and deleting their repositories, leaving downstream projects completely unable to run
# Downstream developers never pressured you, so why delete the repos? If you stop maintaining, don't you owe everyone at least an explanation? Couldn't you leave the repo up? Deleting the GitHub projects is one thing, but deleting the base models as well is too much. At least give a reason for the deletion, or delete the models but leave a README?
# My home network connection is poor; the models will be added here once the download finishes.
|
XRandomForest/mymodel2
|
XRandomForest
| 2023-03-11T08:20:47Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-11T08:20:19Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 48 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`setfit.modeling.SupConLoss`
Parameters of the fit()-Method:
```
{
"epochs": 12,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 576,
"warmup_steps": 58,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_mix_tokens': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
justinian336/salvadoran-news-summarizer-base-auto
|
justinian336
| 2023-03-11T07:54:55Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"es",
"dataset:justinian336/salvadoran-news",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-03-07T16:38:27Z |
---
datasets:
- justinian336/salvadoran-news
language:
- es
metrics:
- rouge
pipeline_tag: summarization
---
|
Vermillion-Qi/sd-class-butterflies-32
|
Vermillion-Qi
| 2023-03-11T07:33:01Z | 31 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-03-11T07:32:25Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Vermillion-Qi/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Vorlde/ReinforcePixelcopter
|
Vorlde
| 2023-03-11T07:25:44Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T06:05:12Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: ReinforcePixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.80 +/- 16.47
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Senura/ppo-SnowballTarget
|
Senura
| 2023-03-11T06:51:58Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-11T06:50:45Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: Senura/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Vorlde/ReinforceCartPolev01
|
Vorlde
| 2023-03-11T06:45:36Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T05:16:03Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: ReinforceCartPolev01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bortle/astrophotography-object-classifier-alpha4
|
bortle
| 2023-03-11T06:41:09Z | 224 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain",
"vision",
"dataset:ppicazo/autotrain-data-astrophotography-object-classifier-alpha4",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-11T06:37:19Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- ppicazo/autotrain-data-astrophotography-object-classifier-alpha4
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.006629293486504155
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 40333104876
- CO2 Emissions (in grams): 0.0066
## Validation Metrics
- Loss: 0.015
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
|
Paperbag/a2c-PandaReachDense-v2
|
Paperbag
| 2023-03-11T06:27:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-09T09:23:47Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.77 +/- 0.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
toastynews/electra-hongkongese-base-hkt-ws
|
toastynews
| 2023-03-11T06:18:51Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"token-classification",
"generated_from_trainer",
"yue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-11T06:16:55Z |
---
language: yue
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: electra-hongkongese-base-hkt-ws
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-hongkongese-base-hkt-ws
This model is a fine-tuned version of [toastynews/electra-hongkongese-base-discriminator](https://huggingface.co/toastynews/electra-hongkongese-base-discriminator) on [HKCanCor](https://pycantonese.org/data.html#built-in-data), [CityU and AS](http://sighan.cs.uchicago.edu/bakeoff2005/) for word segmentation.
## Model description
Performs word segmentation on text from Hong Kong.
There are two versions: hk, trained with only text from Hong Kong, and hkt, trained with text from Hong Kong and Taiwan. Each version has base and small model sizes.
## Intended uses & limitations
Trained to handle both Hongkongese/Cantonese and Standard Chinese from Hong Kong. Text from other places and English do not work as well.
The easiest way to use it is with the CKIP Transformers library, as sketched below.
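A hedged sketch of that route; the `model_name` argument reflects my understanding of recent `ckip-transformers` releases and may need adjusting, and the example sentence is illustrative:
```python
from ckip_transformers.nlp import CkipWordSegmenter

# Assumption: recent ckip-transformers versions accept a Hub model path via model_name
ws_driver = CkipWordSegmenter(model_name="toastynews/electra-hongkongese-base-hkt-ws")

sentences = ["今日天氣真係好熱"]  # illustrative Hongkongese sentence
print(ws_driver(sentences))
```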
## Training and evaluation data
HKCanCor, CityU and AS are converted to BI-encoded word segmentation dataset in Hugging Face format using code from [finetune-ckip-transformers](https://github.com/toastynews/finetune-ckip-transformers).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
|dataset |token_f |token_p |token_r |
|:---------|--------|--------|--------|
|ud yue_hk | 0.9404| 0.9442| 0.9367|
|ud zh_hk | 0.9327| 0.9404| 0.9251|
|_hkcancor_|_0.9875_|_0.9868_|_0.9883_|
|cityu | 0.9766| 0.9756| 0.9777|
|as | 0.9652| 0.9601| 0.9704|
_Was trained on hkcancor. Reported for reference only._
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.10.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
avoroshilov/dqn-SpaceInvadersNoFrameskip-v4
|
avoroshilov
| 2023-03-11T06:11:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T06:11:26Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 716.50 +/- 329.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga avoroshilov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga avoroshilov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga avoroshilov
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
toastynews/electra-hongkongese-small-hkt-ws
|
toastynews
| 2023-03-11T06:04:15Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"token-classification",
"generated_from_trainer",
"yue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-11T06:00:37Z |
---
language: yue
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: electra-hongkongese-small-hkt-ws
results: []
---
# electra-hongkongese-small-hkt-ws
This model is a fine-tuned version of [toastynews/electra-hongkongese-small-discriminator](https://huggingface.co/toastynews/electra-hongkongese-small-discriminator) on [HKCanCor](https://pycantonese.org/data.html#built-in-data), [CityU and AS](http://sighan.cs.uchicago.edu/bakeoff2005/) for word segmentation.
## Model description
Performs word segmentation on text from Hong Kong.
There are two versions: *hk*, trained only with text from Hong Kong, and *hkt*, trained with text from Hong Kong and Taiwan. Each version has base and small model sizes.
## Intended uses & limitations
Trained to handle both Hongkongese/Cantonese and Standard Chinese from Hong Kong. Text from other places and English do not work as well.
The easiest way to use the model is with the CKIP Transformers library.
## Training and evaluation data
HKCanCor, CityU and AS are converted to a BI-encoded word segmentation dataset in Hugging Face format using code from [finetune-ckip-transformers](https://github.com/toastynews/finetune-ckip-transformers).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
|dataset |token_f |token_p |token_r |
|:---------|--------|--------|--------|
|ud yue_hk | 0.9389| 0.9429| 0.9350|
|ud zh_hk | 0.9314| 0.9398| 0.9231|
|_hkcancor_|_0.9807_|_0.9798_|_0.9816_|
|cityu | 0.9712| 0.9705| 0.9718|
|as | 0.9644| 0.9611| 0.9678|
_The model was trained on hkcancor, so that row is reported for reference only._
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.10.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
NawinCom/my_awesome_model_3
|
NawinCom
| 2023-03-11T06:00:49Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-11T04:29:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_3
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0954
- Accuracy: 0.9680
## Model description
More information needed
## Intended uses & limitations
More information needed
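Pending more details from the author, here is a minimal inference sketch (an assumption, not documented in this card): the checkpoint is a text-classification fine-tune, so standard pipeline usage should apply, but the meaning of the labels depends on the unspecified fine-tuning dataset.
```python
# Hedged sketch: standard usage for a text-classification checkpoint.
# The meaning of the returned labels is an assumption (the dataset is undocumented).
from transformers import pipeline

classifier = pipeline("text-classification", model="NawinCom/my_awesome_model_3")
print(classifier("This movie was surprisingly good."))
```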
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.09 | 200 | 0.2369 | 0.9040 |
| No log | 0.19 | 400 | 0.1859 | 0.9324 |
| 0.2931 | 0.28 | 600 | 0.1624 | 0.9442 |
| 0.2931 | 0.38 | 800 | 0.1194 | 0.9569 |
| 0.1456 | 0.47 | 1000 | 0.1245 | 0.9588 |
| 0.1456 | 0.57 | 1200 | 0.1044 | 0.9617 |
| 0.1456 | 0.66 | 1400 | 0.1063 | 0.9611 |
| 0.1194 | 0.75 | 1600 | 0.1021 | 0.9634 |
| 0.1194 | 0.85 | 1800 | 0.1618 | 0.9490 |
| 0.1107 | 0.94 | 2000 | 0.1113 | 0.9643 |
| 0.1107 | 1.04 | 2200 | 0.1163 | 0.9630 |
| 0.1107 | 1.13 | 2400 | 0.0954 | 0.9680 |
| 0.079 | 1.22 | 2600 | 0.1272 | 0.9635 |
| 0.079 | 1.32 | 2800 | 0.0976 | 0.9657 |
| 0.0715 | 1.41 | 3000 | 0.0995 | 0.9680 |
| 0.0715 | 1.51 | 3200 | 0.0996 | 0.9660 |
| 0.0715 | 1.6 | 3400 | 0.1001 | 0.9670 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Fr33Radical/Auxi
|
Fr33Radical
| 2023-03-11T05:47:10Z | 0 | 0 | null |
[
"Student Loans",
"en",
"dataset:openwebtext",
"arxiv:1910.09700",
"license:other",
"region:us"
] | null | 2023-03-11T05:42:30Z |
---
license: other
datasets:
- openwebtext
language:
- en
tags:
- Student Loans
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It is an Auxi AI bot. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
You are AuxI, the smart AI assistant at Auxilo, who answers all questions pertaining to Auxilo and the loans it offers to students.
You are well versed in the loan process and can therefore engage a new prospect and collect their details (their course, when they plan to study, their college information, as well as their email and mobile number) to create a lead.
You are an empathetic sales AI who is able to identify with and empathize with the student and counsel them with the right information, proactively and intuitively.
You are always cheerful, positive, and inspiring in your tone of voice.
Your answers are short and crisp.
You are able to generate the top 3 questions of the day as a preset for the chat; the set shifts every other day, or periodically, according to the trend of questions asked and answered by you.
You are constantly learning from customer interactions and improving yourself to be better at the job.
You are able to provide personalized recommendations based on the user’s financial needs, goals and risk appetite.
You are able to provide basic education around financial concepts and terminology.
You are also able to explain the different loan options available in detail and suggest the best option for the user.
You are able to anticipate customer needs and proactively provide them with relevant information.
You are able to provide quick and accurate responses to customer queries.
You are able to track the customer’s journey and provide them with the best experience during the loan application process.
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NawinCom/my_awesome_model_2
|
NawinCom
| 2023-03-11T05:40:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-11T04:16:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_2
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0950
- Accuracy: 0.9655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.09 | 200 | 0.4448 | 0.8383 |
| No log | 0.19 | 400 | 0.2267 | 0.9250 |
| 0.2842 | 0.28 | 600 | 0.1300 | 0.9532 |
| 0.2842 | 0.38 | 800 | 0.1689 | 0.9437 |
| 0.1348 | 0.47 | 1000 | 0.1445 | 0.9530 |
| 0.1348 | 0.57 | 1200 | 0.1237 | 0.9554 |
| 0.1348 | 0.66 | 1400 | 0.1103 | 0.9624 |
| 0.1198 | 0.75 | 1600 | 0.0950 | 0.9655 |
| 0.1198 | 0.85 | 1800 | 0.1115 | 0.9642 |
| 0.1053 | 0.94 | 2000 | 0.1192 | 0.9603 |
| 0.1053 | 1.04 | 2200 | 0.1102 | 0.9622 |
| 0.1053 | 1.13 | 2400 | 0.1096 | 0.9650 |
| 0.0716 | 1.22 | 2600 | 0.1212 | 0.9650 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
NawinCom/my_awesome_model_4
|
NawinCom
| 2023-03-11T05:39:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-11T04:29:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_4
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0866
- Accuracy: 0.9644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.09 | 200 | 0.2342 | 0.9098 |
| No log | 0.19 | 400 | 0.2575 | 0.9178 |
| 0.2768 | 0.28 | 600 | 0.1922 | 0.9316 |
| 0.2768 | 0.38 | 800 | 0.1274 | 0.9566 |
| 0.1402 | 0.47 | 1000 | 0.1153 | 0.9566 |
| 0.1402 | 0.57 | 1200 | 0.1516 | 0.9472 |
| 0.1402 | 0.66 | 1400 | 0.0866 | 0.9644 |
| 0.126 | 0.75 | 1600 | 0.1375 | 0.9572 |
| 0.126 | 0.85 | 1800 | 0.1084 | 0.9638 |
| 0.1068 | 0.94 | 2000 | 0.0998 | 0.9655 |
| 0.1068 | 1.04 | 2200 | 0.1060 | 0.9656 |
| 0.1068 | 1.13 | 2400 | 0.0999 | 0.9689 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
NawinCom/my_awesome_model
|
NawinCom
| 2023-03-11T05:38:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-10T13:51:48Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1455
- Accuracy: 0.9582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.09 | 200 | 0.1455 | 0.9582 |
| No log | 0.19 | 400 | 0.1849 | 0.9604 |
| 0.0446 | 0.28 | 600 | 0.1580 | 0.9593 |
| 0.0446 | 0.38 | 800 | 0.1968 | 0.9545 |
| 0.0635 | 0.47 | 1000 | 0.1853 | 0.9603 |
| 0.0635 | 0.57 | 1200 | 0.1476 | 0.9589 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
NawinCom/my_awesome_model_1
|
NawinCom
| 2023-03-11T04:53:45Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-11T04:08:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_1
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0896
- Accuracy: 0.9640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.38 | 200 | 0.1490 | 0.9448 |
| No log | 0.75 | 400 | 0.1117 | 0.9582 |
| 0.1981 | 1.13 | 600 | 0.0939 | 0.9621 |
| 0.1981 | 1.51 | 800 | 0.0896 | 0.9640 |
| 0.0706 | 1.88 | 1000 | 0.0975 | 0.9641 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
r4zzchaudhary/tathyanka-nlq-final
|
r4zzchaudhary
| 2023-03-11T04:48:52Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-10T23:34:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tathyanka-nlq-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tathyanka-nlq-final
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0042
- Rouge2 Precision: 0.8529
- Rouge2 Recall: 0.404
- Rouge2 Fmeasure: 0.5477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| No log | 1.0 | 315 | 0.0138 | 0.8516 | 0.4034 | 0.5469 |
| 0.474 | 2.0 | 630 | 0.0081 | 0.8516 | 0.4034 | 0.5469 |
| 0.474 | 3.0 | 945 | 0.0054 | 0.853 | 0.4041 | 0.5478 |
| 0.0163 | 4.0 | 1260 | 0.0046 | 0.8529 | 0.404 | 0.5477 |
| 0.0093 | 5.0 | 1575 | 0.0042 | 0.8529 | 0.404 | 0.5477 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
tetrismegistus/PPO-LunarLander-v2
|
tetrismegistus
| 2023-03-11T04:00:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T03:13:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.05 +/- 14.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
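Until the TODO above is filled in, here is a hedged sketch of typical loading and evaluation code; the checkpoint filename follows the usual `huggingface_sb3` convention and is an assumption.
```python
# Hedged sketch, not the author's code.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="tetrismegistus/PPO-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # filename is an assumption
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```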
|
Kentris/ML-Agents-Soccer
|
Kentris
| 2023-03-11T03:50:18Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"ML-Agents-SoccerTwos",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2023-03-11T03:37:01Z |
---
task: reinforcement-learning
tags:
- ML-Agents-SoccerTwos
- reinforcement-learning
library_name: ml-agents
---
|
Kentris/a2c-PandaReachDense-v2
|
Kentris
| 2023-03-11T03:29:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T02:09:28Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.57 +/- 0.81
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
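The TODO above has not been filled in; the sketch below shows one plausible way to load the agent (not the author's code). The checkpoint filename is an assumption, and if the repository also stores `VecNormalize` statistics they would need to be loaded for the reported reward to be reproducible.
```python
# Hedged sketch, not the author's code.
import gym
import panda_gym  # noqa: F401  (registers PandaReachDense-v2)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Kentris/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # filename is an assumption
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
```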
|
huoyan/lora_outputs_corgi_v1
|
huoyan
| 2023-03-11T03:20:06Z | 0 | 0 | null |
[
"paddlepaddle",
"stable-diffusion",
"stable-diffusion-ppdiffusers",
"text-to-image",
"ppdiffusers",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-11T03:19:11Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks corgi
tags:
- stable-diffusion
- stable-diffusion-ppdiffusers
- text-to-image
- ppdiffusers
- lora
inference: false
---
# LoRA DreamBooth - huoyan/lora_outputs_corgi_v1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of sks corgi" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.





|
sptrodon/Reinforce-Pixelcopter-PLE-v0
|
sptrodon
| 2023-03-11T01:25:10Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T01:25:01Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 68.80 +/- 52.35
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Kentris/a2c-AntBulletEnv-v0
|
Kentris
| 2023-03-11T01:21:27Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T01:11:14Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 959.12 +/- 226.92
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
vocabtrimmer/mt5-small-itquad-qa-trimmed-it
|
vocabtrimmer
| 2023-03-11T01:07:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-11T00:43:02Z |
# Vocabulary Trimmed [lmqg/mt5-small-itquad-qa](https://huggingface.co/lmqg/mt5-small-itquad-qa): `vocabtrimmer/mt5-small-itquad-qa-trimmed-it`
This model is a trimmed version of [lmqg/mt5-small-itquad-qa](https://huggingface.co/lmqg/mt5-small-itquad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models in order to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-itquad-qa | vocabtrimmer/mt5-small-itquad-qa-trimmed-it |
|:---------------------------|:---------------------------|:----------------------------------------------|
| parameter_size_full | 300,165,504 | 157,784,448 |
| parameter_size_embedding | 256,103,424 | 113,722,368 |
| vocab_size | 250,101 | 111,057 |
| compression_rate_full | 100.0 | 52.57 |
| compression_rate_embedding | 100.0 | 44.4 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | | 2 |
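As a quick sanity check of the numbers above, the trimmed checkpoint can be loaded with standard `transformers` classes (a hedged sketch, not part of the original card):
```python
# Hedged sketch: confirm the parameter count and vocabulary size roughly match
# the trimming summary table (about 157.8M parameters vs 300.2M originally).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "vocabtrimmer/mt5-small-itquad-qa-trimmed-it"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

print(model.num_parameters())  # expected to be close to 157,784,448
print(len(tokenizer))          # trimmed vocabulary, close to 111,057
```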
|
steveice/videomae-base-finetuned-engine-subset-20230310
|
steveice
| 2023-03-11T00:47:58Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-03-10T23:08:16Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-engine-subset-20230310
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-engine-subset-20230310
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4958
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
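Pending more details, a hedged inference sketch (standard VideoMAE classification usage with 16 sampled frames; not from the original card). The random frames below are a placeholder for a real engine clip, and the label set comes from whatever the undocumented fine-tuning dataset defined.
```python
# Hedged sketch: classify a 16-frame clip with the fine-tuned VideoMAE model.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "steveice/videomae-base-finetuned-engine-subset-20230310"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# 16 RGB frames (channels-first); random data used here as a placeholder
video = list(np.random.randint(0, 256, (16, 3, 224, 224), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```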
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5947 | 0.05 | 31 | 2.5383 | 0.15 |
| 2.4195 | 1.05 | 62 | 2.5108 | 0.15 |
| 2.2476 | 2.05 | 93 | 2.0533 | 0.225 |
| 1.9449 | 3.05 | 124 | 2.0719 | 0.2375 |
| 1.5724 | 4.05 | 155 | 1.4756 | 0.475 |
| 1.395 | 5.05 | 186 | 1.2884 | 0.5 |
| 1.0822 | 6.05 | 217 | 1.0679 | 0.575 |
| 1.0635 | 7.05 | 248 | 0.8040 | 0.7 |
| 0.8707 | 8.05 | 279 | 0.9334 | 0.525 |
| 0.7042 | 9.05 | 310 | 0.6477 | 0.75 |
| 0.6543 | 10.05 | 341 | 0.6963 | 0.7375 |
| 0.6807 | 11.05 | 372 | 0.4958 | 0.85 |
| 0.4924 | 12.05 | 403 | 0.6374 | 0.775 |
| 0.4822 | 13.05 | 434 | 0.6145 | 0.75 |
| 0.4878 | 14.05 | 465 | 0.6274 | 0.7625 |
| 0.4442 | 15.05 | 496 | 0.4231 | 0.85 |
| 0.2739 | 16.05 | 527 | 0.4999 | 0.85 |
| 0.3514 | 17.05 | 558 | 0.4639 | 0.8375 |
| 0.4158 | 18.05 | 589 | 0.4291 | 0.85 |
| 0.2689 | 19.02 | 600 | 0.4294 | 0.85 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1+cu113
- Datasets 2.10.1
- Tokenizers 0.13.2
|
DarkAgent/Dungeons_N_Waifus
|
DarkAgent
| 2023-03-11T00:33:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-11T00:33:22Z |
---
license: creativeml-openrail-m
---
|
Corianas/CartPole-v1
|
Corianas
| 2023-03-11T00:31:23Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T00:31:13Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Hyungsuk/Taxi-v3
|
Hyungsuk
| 2023-03-11T00:31:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T00:31:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Hyungsuk/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Hyungsuk/q-FrozenLake-v1-4x4-noSlippery
|
Hyungsuk
| 2023-03-11T00:30:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T00:30:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Hyungsuk/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dansa08/mbart-neutralization
|
dansa08
| 2023-03-11T00:11:42Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"Text Neutralization",
"Inclusive Language",
"es",
"dataset:hackathon-pln-es/neutral-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-10T22:19:37Z |
---
license: apache-2.0
datasets:
- hackathon-pln-es/neutral-es
language:
- es
metrics:
- bleu
tags:
- Text Neutralization
- Inclusive Language
---
|
Corianas/Corianas-Taxi-v3
|
Corianas
| 2023-03-11T00:09:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-11T00:09:13Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Corianas-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Corianas/Corianas-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Sirianth/Pyramids
|
Sirianth
| 2023-03-11T00:04:56Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-11T00:04:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: Sirianth/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nsecord/Reinforce-Pixelcopter-PLE-v0
|
nsecord
| 2023-03-10T23:56:01Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T23:55:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.76 +/- 30.19
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Cesar514/LunarLander-v2
|
Cesar514
| 2023-03-10T23:40:58Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T22:39:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.13 +/- 20.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CCHF/SD_Models
|
CCHF
| 2023-03-10T23:38:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-10T22:35:59Z |
---
license: creativeml-openrail-m
---
This is a copy of Stable Diffusion models from Civitai, mirrored for Colab training purposes.
|
Mayhem50/bert-base-multilingual-cased-nli
|
Mayhem50
| 2023-03-10T23:08:19Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-10T23:03:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Mayhem50/bert-base-multilingual-cased-nli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Mayhem50/bert-base-multilingual-cased-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Mayhem50/bert-base-multilingual-cased-nli')
model = AutoModel.from_pretrained('Mayhem50/bert-base-multilingual-cased-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 905 with parameters:
```
{'batch_size': 624}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MNRLGradCache`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 90,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.00032
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 91,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 150, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
YoussefHilaly/GRU-EMOTION-AR-TEXT-72P
|
YoussefHilaly
| 2023-03-10T23:00:01Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-03-10T22:58:45Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 9.999999747378752e-05 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
spacemanidol/flan-t5-base-5-6-xsum
|
spacemanidol
| 2023-03-10T22:50:14Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-28T18:21:32Z |
---
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: base-5-6
results:
- task:
name: Summarization
type: summarization
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 39.0404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-5-6
This model is a fine-tuned version of [x/base-5-6/](https://huggingface.co/x/base-5-6/) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6972
- Rouge1: 39.0404
- Rouge2: 15.9169
- Rougel: 31.2288
- Rougelsum: 31.2183
- Gen Len: 26.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
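Pending more details, a hedged usage sketch (standard summarization pipeline usage for a T5-style checkpoint fine-tuned on XSum; not part of the original card):
```python
# Hedged sketch: summarize an article with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="spacemanidol/flan-t5-base-5-6-xsum")
article = "Replace this placeholder with the news article text to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```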
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.10.0
- Tokenizers 0.13.2
|
qgallouedec/sample-factory-reach-wall-v2
|
qgallouedec
| 2023-03-10T22:44:52Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T22:44:48Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: reach-wall-v2
type: reach-wall-v2
metrics:
- type: mean_reward
value: 4825.23 +/- 9.56
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **reach-wall-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/sample-factory-reach-wall-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=reach-wall-v2 --train_dir=./train_dir --experiment=sample-factory-reach-wall-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=reach-wall-v2 --train_dir=./train_dir --experiment=sample-factory-reach-wall-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Gabcsor/poca-SoccerTwos
|
Gabcsor
| 2023-03-10T22:44:17Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-10T22:40:38Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Gabcsor/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
janzw/Reinforce-copterv1
|
janzw
| 2023-03-10T22:23:48Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T22:23:40Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-copterv1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.33 +/- 18.83
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Nalenczewski/keyword_category_classifier_v4
|
Nalenczewski
| 2023-03-10T22:23:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-10T22:16:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: keyword_category_classifier_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# keyword_category_classifier_v4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2337
- Accuracy: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7312 | 1.0 | 986 | 0.2616 | 0.9165 |
| 0.2264 | 2.0 | 1972 | 0.2337 | 0.9266 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Jclementg/q-FrozenLake-v1-4x4-noSlippery
|
Jclementg
| 2023-03-10T22:22:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T22:22:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Jclementg/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Cesar514/PPO2-LunarLander-v3
|
Cesar514
| 2023-03-10T22:03:32Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T22:01:30Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -183.18 +/- 71.91
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Cesar514/PPO2-LunarLander-v3'
'batch_size': 512
'minibatch_size': 128}
```
|
c0d3s/bucket-not-bucket
|
c0d3s
| 2023-03-10T22:00:29Z | 234 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-10T21:50:51Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: bucket-not-bucket
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9777777791023254
---
# bucket-not-bucket
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
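A hedged inference sketch (standard image-classification usage; not part of the generated card):
```python
# Hedged sketch: decide whether a photo shows a bucket.
from transformers import pipeline

classifier = pipeline("image-classification", model="c0d3s/bucket-not-bucket")
print(classifier("photo.jpg"))  # "photo.jpg" is a placeholder path to any image
```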
## Example Images
#### bucket

#### not bucket

|
Cesar514/ppo-LunarLander-v2
|
Cesar514
| 2023-03-10T21:59:27Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2022-12-17T20:06:25Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -130.73 +/- 72.89
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Cesar514/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
adam1brownell/rl_course_vizdoom_health_gathering_supreme
|
adam1brownell
| 2023-03-10T21:55:43Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T21:55:35Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.86 +/- 5.00
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r adam1brownell/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
cthiriet/Reinforce-Pixelcopter-PLE-v0
|
cthiriet
| 2023-03-10T21:55:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-07T23:10:52Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.80 +/- 14.88
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gauravkuppa/ppo-SnowballTarget1
|
gauravkuppa
| 2023-03-10T21:39:08Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-10T21:39:03Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: gauravkuppa/ppo-SnowballTarget1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
qgallouedec/sample-factory-reach-v2
|
qgallouedec
| 2023-03-10T21:18:49Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T21:18:45Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: reach-v2
type: reach-v2
metrics:
- type: mean_reward
value: 4856.62 +/- 17.45
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **reach-v2** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r qgallouedec/sample-factory-reach-v2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m enjoy --algo=APPO --env=reach-v2 --train_dir=./train_dir --experiment=sample-factory-reach-v2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m train --algo=APPO --env=reach-v2 --train_dir=./train_dir --experiment=sample-factory-reach-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
dataLearning/dqn-SpaceInvadersNoFrameskip-v4
|
dataLearning
| 2023-03-10T21:15:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T21:15:02Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 486.00 +/- 93.48
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dataLearning -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dataLearning -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dataLearning
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
fzaghloul/vit-base-patch16-224-finetuned-flower
|
fzaghloul
| 2023-03-10T20:23:30Z | 176 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-10T20:09:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
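Since the card does not include a usage snippet, here is a minimal inference sketch (assuming a recent `transformers` release that provides `AutoImageProcessor`; the image path is a placeholder):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "fzaghloul/vit-base-patch16-224-finetuned-flower"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("flower.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```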
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
spacemanidol/flan-t5-base-6-2-xsum
|
spacemanidol
| 2023-03-10T20:15:44Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-27T22:49:09Z |
---
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: base-6-2
results:
- task:
name: Summarization
type: summarization
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 38.0368
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-6-2
This model is a fine-tuned version of [x/6-2/](https://huggingface.co/x/6-2/) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0713
- Rouge1: 38.0368
- Rouge2: 15.1872
- Rougel: 30.6474
- Rougelsum: 30.6344
- Gen Len: 26.3968
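A minimal summarization sketch with this checkpoint (the `summarize:` prefix and the input text are assumptions; adjust them to match how the model was actually trained):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "spacemanidol/flan-t5-base-6-2-xsum"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = "summarize: " + "Replace this with the article you want to summarize."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```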
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.12.1
|
steveice/videomae-base-finetuned-engine-subset
|
steveice
| 2023-03-10T20:02:38Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-03-10T19:33:03Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-engine-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-engine-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5634
- Accuracy: 0.475
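A minimal video-classification sketch (random frames stand in for a real clip, and `VideoMAEImageProcessor` is assumed to be available in the installed `transformers` version):
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "steveice/videomae-base-finetuned-engine-subset"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# 16 frames of 224x224 RGB; replace with frames sampled from a real video
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```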
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 224
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6687 | 0.25 | 57 | 2.5948 | 0.15 |
| 2.3001 | 1.25 | 114 | 2.2452 | 0.175 |
| 2.1531 | 2.25 | 171 | 1.9180 | 0.3875 |
| 1.6332 | 3.24 | 224 | 1.5634 | 0.475 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1+cu113
- Datasets 2.10.1
- Tokenizers 0.13.2
|
JTredup/dqn-SpaceInvadersNoFrameskip-v4
|
JTredup
| 2023-03-10T19:53:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T19:52:44Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 706.00 +/- 177.17
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga JTredup -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga JTredup -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga JTredup
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
vagnereletronica/sd-class-butterflies-32
|
vagnereletronica
| 2023-03-10T19:48:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"region:us"
] |
unconditional-image-generation
| 2023-03-10T19:46:13Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('vagnereletronica/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
NightMachinery/task_adapter_amazon_xlm_roberta_base_en_wiki_250
|
NightMachinery
| 2023-03-10T19:27:58Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:sentiment/amazon",
"xlm-roberta",
"dataset:amazon",
"region:us"
] | null | 2023-03-10T18:53:14Z |
---
tags:
- adapter-transformers
- adapterhub:sentiment/amazon
- xlm-roberta
datasets:
- amazon
---
# Adapter `NightMachinery/task_adapter_amazon_xlm_roberta_base_en_wiki_250` for xlm-roberta-base
An [adapter](https://adapterhub.ml) for the `xlm-roberta-base` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("xlm-roberta-base")
adapter_name = model.load_adapter("NightMachinery/task_adapter_amazon_xlm_roberta_base_en_wiki_250", source="hf", set_active=True)
```
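A short inference sketch continuing from the snippet above (the example sentence is arbitrary, and it is assumed the active classification head returns standard `logits`):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
inputs = tokenizer("This product exceeded my expectations!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.softmax(dim=-1))
```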
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
jackoyoungblood/rl_course_vizdoom_health_gathering_supreme
|
jackoyoungblood
| 2023-03-10T19:19:14Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-09T15:44:29Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_deadly_corridor
type: doom_deadly_corridor
metrics:
- type: mean_reward
value: 12.24 +/- 8.84
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_deadly_corridor** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jackoyoungblood/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_deadly_corridor --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_deadly_corridor --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
NightMachinery/task_adapter_amazon_bert-base-multilingual-cased_en_wiki_250
|
NightMachinery
| 2023-03-10T19:10:30Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:sentiment/amazon",
"bert",
"dataset:amazon",
"region:us"
] | null | 2023-03-10T19:10:29Z |
---
tags:
- adapter-transformers
- adapterhub:sentiment/amazon
- bert
datasets:
- amazon
---
# Adapter `NightMachinery/task_adapter_amazon_bert-base-multilingual-cased_en_wiki_250` for bert-base-multilingual-cased
An [adapter](https://adapterhub.ml) for the `bert-base-multilingual-cased` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-multilingual-cased")
adapter_name = model.load_adapter("NightMachinery/task_adapter_amazon_bert-base-multilingual-cased_en_wiki_250", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
TobiTob/decision_transformer_no_training
|
TobiTob
| 2023-03-10T19:09:22Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"decision_transformer",
"generated_from_trainer",
"dataset:city_learn",
"endpoints_compatible",
"region:us"
] | null | 2023-03-10T19:08:54Z |
---
tags:
- generated_from_trainer
datasets:
- city_learn
model-index:
- name: decision_transformer_no_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# decision_transformer_no_training
This model is a fine-tuned version of [](https://huggingface.co/) on the city_learn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 240
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
NightMachinery/task_adapter_amazon_xlm_roberta_base_en_wiki
|
NightMachinery
| 2023-03-10T18:01:18Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:sentiment/amazon",
"xlm-roberta",
"dataset:amazon",
"region:us"
] | null | 2023-03-10T18:01:14Z |
---
tags:
- adapterhub:sentiment/amazon
- xlm-roberta
- adapter-transformers
datasets:
- amazon
---
# Adapter `NightMachinery/task_adapter_amazon_xlm_roberta_base_en_wiki` for xlm-roberta-base
An [adapter](https://adapterhub.ml) for the `xlm-roberta-base` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("xlm-roberta-base")
adapter_name = model.load_adapter("NightMachinery/task_adapter_amazon_xlm_roberta_base_en_wiki", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
kurohige/rl_course_vizdoom_health_gathering_supreme
|
kurohige
| 2023-03-10T17:41:27Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T17:41:13Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.09 +/- 4.15
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kurohige/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
agcagc/poca-SoccerTwos-v3
|
agcagc
| 2023-03-10T17:33:09Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-10T17:32:32Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: agcagc/poca-SoccerTwos-v3
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HarmlessFlower/DRL_Course_Models
|
HarmlessFlower
| 2023-03-10T17:12:16Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-10T17:11:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.94 +/- 19.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is a placeholder; use the `.zip` file actually stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is a placeholder; point it at the .zip checkpoint stored in the repo
checkpoint = load_from_hub(repo_id="HarmlessFlower/DRL_Course_Models", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|