| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 – 2025-08-31 06:26:39) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (530 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2025-08-31 06:26:13) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
magnustragardh/rl_course_vizdoom_health_gathering_supreme
|
magnustragardh
| 2023-07-30T20:09:30Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T20:08:15Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.50 +/- 5.76
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r magnustragardh/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may need to set `--train_for_env_steps` to a suitably high number, as the experiment resumes from the number of steps it had reached when it concluded.
|
Milanesa16/SusyDiaz
|
Milanesa16
| 2023-07-30T20:08:32Z | 0 | 0 | null |
[
"peru",
"rvc",
"rmvpe",
"susydiaz",
"es",
"license:openrail",
"region:us"
] | null | 2023-07-30T20:00:39Z |
---
license: openrail
language:
- es
tags:
- peru
- rvc
- rmvpe
- susydiaz
---
|
Lukee4/biomedlm-gc2019-redacted
|
Lukee4
| 2023-07-30T19:57:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-27T18:38:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e8_s6789_v3_l5_r2
|
KingKazma
| 2023-07-30T19:56:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T19:56:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e9_s6789_v3_l6_v50
|
KingKazma
| 2023-07-30T19:55:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T19:55:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e7_s6789_v3_l5_r2
|
KingKazma
| 2023-07-30T19:49:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T19:49:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3_l6_v50
|
KingKazma
| 2023-07-30T19:48:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T19:47:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Lukee4/biomedlm-gc2019-synthetic
|
Lukee4
| 2023-07-30T19:41:48Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T19:41:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3_l6_v50
|
KingKazma
| 2023-07-30T19:32:47Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T19:32:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e5_s6789_v3_l6_v50
|
KingKazma
| 2023-07-30T19:25:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T19:25:07Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e2_s6789_v3_l5_r2
|
KingKazma
| 2023-07-30T19:13:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T18:13:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e3_s6789_v3_l6_v50
|
KingKazma
| 2023-07-30T19:09:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T18:23:35Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
LarryAIDraw/sempai_multimerge
|
LarryAIDraw
| 2023-07-30T19:03:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T18:57:11Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/115758/sempai-magical-sempai
|
LarryAIDraw/Amagi_wending_waters_serene_lotus-Azur_Lane
|
LarryAIDraw
| 2023-07-30T19:03:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T18:56:20Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/118017/amagi-wending-waters-serene-lotus-azur-lane-character-lora
|
LarryAIDraw/hk416v1
|
LarryAIDraw
| 2023-07-30T19:02:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T18:55:40Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/118465/girls-frontline-hk416-hk416
|
LarryAIDraw/MarseillaisV1_0
|
LarryAIDraw
| 2023-07-30T19:02:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T18:55:14Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/119149/marseillais-or-azur-lane-or
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e2_s6789_v3_l6_v50
|
KingKazma
| 2023-07-30T19:02:22Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T18:15:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e-1_s6789_v3_l5_r2
|
KingKazma
| 2023-07-30T18:52:20Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T17:51:50Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
HamZurger/Taxi-V3
|
HamZurger
| 2023-07-30T18:45:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T18:45:36Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # older setups use `import gym` instead

# `load_from_hub` is the download helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="HamZurger/Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
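The pickled object is a dictionary; assuming it stores the Q-table under a `"qtable"` key (as in the course notebooks — the key name is an assumption here), acting greedily is just a row lookup plus argmax. A minimal sketch with a toy table:

```python
import numpy as np

# Hypothetical stand-in for the loaded model dict: a tiny Q-table with
# 3 states and 2 actions (the real Taxi-v3 table is 500x6).
model = {"qtable": np.array([[0.1, 0.9],
                             [0.5, 0.2],
                             [0.0, 0.7]])}

def greedy_action(qtable, state):
    # Exploitation only: pick the action with the highest Q-value for this state.
    return int(np.argmax(qtable[state]))

print(greedy_action(model["qtable"], 0))  # best action in state 0
```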
|
ethanconnelly2/falcon-7b-instruct-ft-adapters
|
ethanconnelly2
| 2023-07-30T18:30:39Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:36:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
KoalaAI/ChatSum-Large
|
KoalaAI
| 2023-07-30T18:09:27Z | 228 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"chat",
"T5",
"en",
"dataset:DarwinAnim8or/autotrain-data-chatsum",
"dataset:samsum",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-07-30T16:41:20Z |
---
tags:
- autotrain
- summarization
- chat
- T5
language:
- en
widget:
- text: >-
Emily: fancy a drink after work today? Kate: sure! Marta: Good idea!
Marta: Where? When? Emily: Maybe in the Pub X at the central station at
5.30? Kate: I may be closer to 6, traffic on my way Marta: Fine for me.
Marta: See you then, Ladies! Emily: Bye! see ya :* Kate: :*
example_title: Meeting at the Pub
- text: >-
Harry: heyyyy are you there?? Cindy: Yes dear what is it? Harry: Can you
call Ela and tell her i need to talk urgent please pick my call. Cindy: what
happened now? an other fight :O Harry: please tell her Cindy: MAN! you
guys... am i some kind of a messenger service here? Harry: PLEASEEEEEEEEE ?
Cindy: ok doing.... but thats the last time. Harry: Yes like always:P Cindy:
Hate you seriously man. Harry: Thank you Cindy: Done you can call her now.
example_title: Harry wants to call Ela
- text: >-
Val: it's raining! Candy: I know, just started... Val: r we going? we will
be wet Candy: maybe wait a little? see if stops Val: ok. let's wait half h
and than see Candy: god idea, I call u then Val: great :)
example_title: Val and Candy
datasets:
- DarwinAnim8or/autotrain-data-chatsum
- samsum
co2_eq_emissions:
emissions: 0.16588727515391594
license: apache-2.0
---
# Model Overview
This is a fine-tune of the FLAN-T5 model from Google. This was trained on the "samsum" dataset in order to summarise chat logs.
There are other model sizes available in this same series:
* [ChatSum-Base (248M)](https://huggingface.co/DarwinAnim8or/FLAN-T5-Base-ChatSum)
* [ChatSum-Small (77M)](https://huggingface.co/KoalaAI/ChatSum-Small)
As of writing, no larger models are planned for this series; in our testing, this model is the best-performing one available.
## Intended Use
The model is intended to be used for generating summaries of chat logs.
It can be employed in a wide range of applications, including but not limited to chat analysis, conversation summarization, and dialogue-based content generation.
## Training Data
The model has been fine-tuned on the samsum dataset, which contains conversations between two or more participants. The dataset is in English, and each conversation is associated with a summary that captures the main points of the discussion.
## Limitations and Ethical Considerations
As with any language model, the FLAN-T5 model has certain limitations and potential ethical considerations:
1. **Limited Context Understanding**: The model's performance heavily relies on the context provided in the chat logs. It may not fully understand the nuances of the conversation, leading to occasional inaccuracies in the generated summaries.
2. **Biases in Training Data**: The model's fine-tuning data (samsum dataset) may contain biases present in the original data source. This could lead to biased or unfair summaries being generated.
3. **Privacy and Data Security**: If the chat logs used for summarization contain sensitive or private information, using this model may pose privacy risks, and proper data anonymization measures should be taken.
4. **Responsibility in Use**: The model should be used responsibly, and the generated summaries should be carefully analyzed before making any critical decisions based on them.
## Validation Metrics
- Loss: 1.218
- Rouge1: 49.316
- Rouge2: 26.518
- RougeL: 42.229
- RougeLsum: 45.716
- Gen Len: 16.799
## Carbon Emissions
- CO2 Emissions (in grams): 0.1659
|
Inzamam567/Useless_Based_mixes
|
Inzamam567
| 2023-07-30T18:06:41Z | 0 | 1 | null |
[
"anime",
"art",
"en",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-30T18:06:41Z |
---
license: cc-by-4.0
language:
- en
tags:
- anime
- art
duplicated_from: AnonymousM/Based-mixes
---
A model I made anonymously at the start, using an anonymously created Vtuber-finetuned model together with WarriorMama777's AbyssOrangeMix2 mixes as a starting point for my mixes:
https://huggingface.co/WarriorMama777/OrangeMixs#abyssorangemix2_hard-aom2h
Why these? I like Vtubers, both models had great NSFW capabilities, and I like the very simple anime look of the final epoch of HLL3. Luckily, version 4 of that user's model has since been uploaded here, so I will provide the link:
https://huggingface.co/CluelessC/hll-test/tree/main
This is the link for the 3rd revision of the model, which I was using up until Based65-final-mix:
https://huggingface.co/grugger/chubas/resolve/main/models/mirrors/hll3vtubers-last-pruned.safetensors
Aside from that, there's nothing really crazy to say about this model that I didn't already say on the Civitai upload. If you want to keep up with what I'm doing, I have a Linktree sharing all my social media platforms:
https://linktr.ee/anonymousm
You can use the model however you like; just remember to credit me and refer to my CC license:
https://mega.nz/file/qExmQBQA#9eyI78TMEJu8V4c84UWitrlDAjyqxrxSVc1D5ktb87k
If you plan on using any of the Based mixes for your own merge, go ahead. Just a word of advice about their nature: the first mix, which I only included here for the sake of archiving, is not very good at anything compared to the other Based mixes; V3 up through 65-Final-Mix are all good. There is something I noticed with model merges, though. In my recipes, especially for Based66-mix (the next entry I am working on), I had trouble getting fully accurate details out of trained LORAs with Based65-final-mix. The reason is that if more than two of the merged models contain a finetuned model that isn't NAI, the mix can conflict heavily with LORA outputs. Based64 and 65-proto-mix do not suffer from this because, to my knowledge, only two finetuned models were included in the recipes I used. I plan to research more deeply how the models I use in future Based mixes were created, to avoid this issue. With all of that said, if you plan on merging my mixes with your own, remember to credit me; you can do whatever you want with the merge, but keep in mind that 65-final-mix may conflict with LORAs, along with whatever merged finetuned model you put into your recipe.
*New model added: Based66*
Both versions have been uploaded here, with the main goal of making LORA compatibility as good as possible while using HLL4, the 4th version of the Hololive Vtuber finetuned model. Version 1 is a first attempt that suffers from anatomy issues; Version 2 achieves the desired LORA compatibility but needs stronger weights on prompts to produce your desired output. A V3 will be worked on to fix these issues in the near future.
|
KoalaAI/ChatSum-Small
|
KoalaAI
| 2023-07-30T18:04:10Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"chat",
"summary",
"en",
"dataset:samsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-24T01:07:25Z |
---
license: apache-2.0
widget:
- text: >-
Emily: fancy a drink after work today? Kate: sure! Marta: Good idea!
Marta: Where? When? Emily: Maybe in the Pub X at the central station at
5.30? Kate: I may be closer to 6, traffic on my way Marta: Fine for me.
Marta: See you then, Ladies! Emily: Bye! see ya :* Kate: :*
example_title: Meeting at the Pub
- text: >-
Harry: heyyyy are you there?? Cindy: Yes dear what is it? Harry: Can you
call Ela and tell her i need to talk urgent please pick my call. Cindy: what
happened now? an other fight :O Harry: please tell her Cindy: MAN! you
guys... am i some kind of a messenger service here? Harry: PLEASEEEEEEEEE ?
Cindy: ok doing.... but thats the last time. Harry: Yes like always:P Cindy:
Hate you seriously man. Harry: Thank you Cindy: Done you can call her now.
example_title: Harry wants to call Ela
- text: >-
Val: it's raining! Candy: I know, just started... Val: r we going? we will
be wet Candy: maybe wait a little? see if stops Val: ok. let's wait half h
and than see Candy: god idea, I call u then Val: great :)
example_title: Val and Candy
datasets:
- samsum
language:
- en
tags:
- chat
- summary
---
# Model Overview
This is a fine-tune of the FLAN-T5-Small model from Google. This was trained for 3 epochs on the "samsum" dataset in order to summarise chat logs.
There are other model sizes available in this same series:
* [ChatSum-Large (783M)](https://huggingface.co/KoalaAI/ChatSum-Large)
* [ChatSum-Base (248M)](https://huggingface.co/KoalaAI/ChatSum-Base)
## Intended Use
The model is intended to be used for generating summaries of chat logs.
It can be employed in a wide range of applications, including but not limited to chat analysis, conversation summarization, and dialogue-based content generation.
## Training Data
The model has been fine-tuned on the samsum dataset, which contains conversations between two or more participants. The dataset is in English, and each conversation is associated with a summary that captures the main points of the discussion.
## Limitations and Ethical Considerations
As with any language model, the FLAN-T5-Small model has certain limitations and potential ethical considerations:
1. **Limited Context Understanding**: The model's performance heavily relies on the context provided in the chat logs. It may not fully understand the nuances of the conversation, leading to occasional inaccuracies in the generated summaries.
2. **Biases in Training Data**: The model's fine-tuning data (samsum dataset) may contain biases present in the original data source. This could lead to biased or unfair summaries being generated.
3. **Privacy and Data Security**: If the chat logs used for summarization contain sensitive or private information, using this model may pose privacy risks, and proper data anonymization measures should be taken.
4. **Responsibility in Use**: The model should be used responsibly, and the generated summaries should be carefully analyzed before making any critical decisions based on them.
|
KoalaAI/ChatSum-Base
|
KoalaAI
| 2023-07-30T18:03:14Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"chat",
"summary",
"en",
"dataset:samsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-25T17:49:29Z |
---
license: apache-2.0
widget:
- text: >-
Emily: fancy a drink after work today? Kate: sure! Marta: Good idea!
Marta: Where? When? Emily: Maybe in the Pub X at the central station at
5.30? Kate: I may be closer to 6, traffic on my way Marta: Fine for me.
Marta: See you then, Ladies! Emily: Bye! see ya :* Kate: :*
example_title: Meeting at the Pub
- text: >-
Harry: heyyyy are you there?? Cindy: Yes dear what is it? Harry: Can you
call Ela and tell her i need to talk urgent please pick my call. Cindy: what
happened now? an other fight :O Harry: please tell her Cindy: MAN! you
guys... am i some kind of a messenger service here? Harry: PLEASEEEEEEEEE ?
Cindy: ok doing.... but thats the last time. Harry: Yes like always:P Cindy:
Hate you seriously man. Harry: Thank you Cindy: Done you can call her now.
example_title: Harry wants to call Ela
- text: >-
Val: it's raining! Candy: I know, just started... Val: r we going? we will
be wet Candy: maybe wait a little? see if stops Val: ok. let's wait half h
and than see Candy: god idea, I call u then Val: great :)
example_title: Val and Candy
datasets:
- samsum
language:
- en
tags:
- chat
- summary
---
# Model Overview
This is a fine-tune of the FLAN-T5-Base model from Google. This was trained for 3 epochs on the "samsum" dataset in order to summarise chat logs.
There are other model sizes available in this same series:
* [ChatSum-Large (783M)](https://huggingface.co/KoalaAI/ChatSum-Large)
* [ChatSum-Small (77M)](https://huggingface.co/KoalaAI/ChatSum-Small)
## Intended Use
The model is intended to be used for generating summaries of chat logs.
It can be employed in a wide range of applications, including but not limited to chat analysis, conversation summarization, and dialogue-based content generation.
## Training Data
The model has been fine-tuned on the samsum dataset, which contains conversations between two or more participants. The dataset is in English, and each conversation is associated with a summary that captures the main points of the discussion.
## Limitations and Ethical Considerations
As with any language model, the FLAN-T5-Base model has certain limitations and potential ethical considerations:
1. **Limited Context Understanding**: The model's performance heavily relies on the context provided in the chat logs. It may not fully understand the nuances of the conversation, leading to occasional inaccuracies in the generated summaries.
2. **Biases in Training Data**: The model's fine-tuning data (samsum dataset) may contain biases present in the original data source. This could lead to biased or unfair summaries being generated.
3. **Privacy and Data Security**: If the chat logs used for summarization contain sensitive or private information, using this model may pose privacy risks, and proper data anonymization measures should be taken.
4. **Responsibility in Use**: The model should be used responsibly, and the generated summaries should be carefully analyzed before making any critical decisions based on them.
|
TomRB22/pivaenist
|
TomRB22
| 2023-07-30T17:59:05Z | 5 | 1 | null |
[
"music",
"autoencoder",
"variational autoencoder",
"music generation",
"en",
"license:mit",
"region:us"
] | null | 2023-05-07T12:21:26Z |
---
license: mit
language:
- en
tags:
- music
- autoencoder
- variational autoencoder
- music generation
---
# Pivaenist
Pivaenist is a random piano music generator with a VAE architecture.
This autoencoder allows the user to encode piano music pieces and to generate new ones.
### Model Description
<figure>
<img src="https://huggingface.co/TomRB22/pivaenist/resolve/main/.images/architecture.png" style="width:100%; display:block; margin:auto">
<figcaption align = "center"><b>Pivaenist's architecture.</b></figcaption>
</figure>
- **Developed by:** TomRB22
- **Model type:** Variational autoencoder
- **License:** MIT
### Sources
**Code:** Some of the code in this repository includes modifications (not wholesale copies, given the differences in architecture) or implementations from the following sources:
1. [TensorFlow. (n.d.). Generate music with an RNN | TensorFlow Core](https://www.tensorflow.org/tutorials/audio/music_generation) - Tensorflow tutorial where pretty-midi is used
2. [Han, X. (2020, September 1). VAE with TensorFlow: 6 Ways](https://towardsdatascience.com/vae-with-tensorflow-6-ways-9c689cb76829) - VAE explanation and code
3. [Li, C. (2019, April 15). Less pain, more gain: A simple method for VAE training with less of that KL-vanishing agony. Microsoft Research.](https://www.microsoft.com/en-us/research/blog/less-pain-more-gain-a-simple-method-for-vae-training-with-less-of-that-kl-vanishing-agony/) - Microsoft article on the KL training schedule which was applied in this model
There might be acknowledgments missing. If you find some other resemblance to a site's code, please notify me and I will make sure to include it.
### Using pivaenist in colab
If you prefer to use or test the model directly, without installing it, you can use [this colab notebook](https://colab.research.google.com/drive/1VLbykZ1YrVlCg9UtTVjdJcN0u18f-akD?usp=sharing) (stored in this repository as well) and follow its instructions. It also serves as a usage example.
## Installation
To install the model, you will need to **change your working directory to the desired installation location** and execute the following commands:
**_Linux / Windows (WSL)_**
```console
git clone https://huggingface.co/TomRB22/pivaenist
sudo apt install -y fluidsynth
pip install -r ./pivaenist/requirements.txt
```
**_Mac_**
```console
git clone https://huggingface.co/TomRB22/pivaenist
brew install fluidsynth
pip install -r ./pivaenist/requirements.txt
```
The first command clones the repository. Then fluidsynth, a real-time MIDI synthesizer, is set up so that the pretty-midi library can use it. The last line installs all remaining dependencies.
## Training Details
Pivaenist was trained on the MIDI files of the [MAESTRO v2.0.0 dataset](https://magenta.tensorflow.org/datasets/maestro). Preprocessing splits each note into pitch, duration and step, which together form a column of a 3xN matrix (which we call a song map), where N is the number of notes and the three rows hold, respectively, the pitches, durations and steps. The VAE's objective is to reconstruct these matrices, which makes it possible to generate random maps by sampling from the latent distribution and then convert them to a MIDI file.
<figure>
<img src="https://huggingface.co/TomRB22/pivaenist/resolve/main/.images/map_example.png" style="width:30%; display:block; margin:auto">
<figcaption align = "center"><b>A horizontally cropped example of a song map.</b></figcaption>
</figure>
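The song-map layout described above can be sketched with plain pandas; the note values below are made up for illustration (real maps come from `midi_to_notes`):

```python
import numpy as np
import pandas as pd

# Illustrative song map: 3 rows (pitch, duration, step) x N columns (notes).
notes = np.array([
    [60,  62,  64,  65],    # pitch: MIDI note numbers
    [0.5, 0.5, 1.0, 1.0],   # duration: how long each note sounds, in seconds
    [0.0, 0.5, 0.5, 1.0],   # step: time elapsed since the previous note started
])
song_map = pd.DataFrame(notes, index=["pitch", "duration", "step"])

print(song_map.shape)  # a 3xN matrix with N = 4 notes
```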
# Documentation
## **_model.VAE_**
### encode
```python
def encode(self, x_input: tf.Tensor) -> tuple[tf.Tensor]:
```
Make a forward pass through the encoder for a given song map, in order to return the latent representation and the distribution's parameters.
Parameters:
* x_input (tf.Tensor): Song map to be encoded by the VAE.
Returns:
* tf.Tensor: The parameters of the distribution which encode the song (mu, sd) and a sampled latent representation from this distribution (z_sample).
### decode
```python
def decode(self, z_sample: tf.Tensor=None) -> tf.Tensor:
```
Decode a latent representation of a song.
Parameters:
* ``z_sample (tf.Tensor)``: Song encoding output by the encoder. If None, the sampling is done over a unit Gaussian distribution.
Returns:
* ``tf.Tensor``: Song map corresponding to the encoding.
## **_audio_**
### midi_to_notes
```python
def midi_to_notes(midi_file: str) -> pd.DataFrame:
```
Convert midi file to "song map" (dataframe where each note is broken
into its components)
Parameters:
* ``midi_file (str)``: Path to the midi file.
Returns:
* ``pd.DataFrame``: 3xN matrix where each column is a note, composed of pitch, duration and step.
### display_audio
```python
def display_audio(pm: pretty_midi.PrettyMIDI, seconds=-1) -> display.Audio:
```
Display a song in PrettyMIDI format as a display.Audio object. This method is especially useful in a Jupyter notebook.
Parameters:
* ``pm (pretty_midi.PrettyMIDI)``: PrettyMIDI object containing a song.
* ``seconds (int)``: Time fraction of the song to be displayed. When set to -1, the full length is taken.
Returns:
* ``display.Audio``: Song as an object allowing for display.
### notes_to_midi
```python
def notes_to_midi(song_map: pd.DataFrame, out_file: str, velocity: int=50) -> pretty_midi.PrettyMIDI:
```
Convert "song map" to midi file (reverse process with respect to
midi_to_notes) and (optionally) save it, generating a PrettyMidi object in the process.
Parameters:
* ``song_map (pd.DataFrame)``: 3xN matrix where each column is a note, composed of pitch, duration and step.
* ``out_file (str)``: Path or file to write .mid file to. If None, no saving is done.
* ``velocity (int)``: Note loudness, i.e. how hard a piano key is struck.
Returns:
* ``pretty_midi.PrettyMIDI``: PrettyMIDI object containing the song's representation.
### generate_and_display
```python
def generate_and_display(model: VAE,
out_file: str=None,
z_sample: tf.Tensor=None,
velocity: int=50,
seconds: int=-1) -> display.Audio:
```
Generate a song, (optionally) save it and display it.
Parameters:
* ``model (VAE)``: Instance of VAE to generate the song with.
* ``out_file (str)``: Path or file to write .mid file to. If None, no saving is done.
* ``z_sample (tf.Tensor)``: Song encoding used to generate the song. If None, an unconditioned piece is generated.
* ``velocity (int)``: Note loudness, i.e. how hard a piano key is struck.
* ``seconds (int)``: Time fraction of the song to be displayed. When set to -1, the full length is taken.
Returns:
* ``display.Audio``: Song as an object allowing for display.
|
ailabturkiye/Valorant_Omen_TR
|
ailabturkiye
| 2023-07-30T17:55:36Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-30T16:41:21Z |
---
license: openrail
---
This is Omen's voice model; it was trained for 500 epochs on an 11-minute dataset.
The training was done by me.
Sharing the model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.
Credits
You are kindly asked to give credits when sharing a cover made with this model on any platform.
Discord: .hicabi
|
acdg1214/q-Taxi-v3-500x6
|
acdg1214
| 2023-07-30T17:50:22Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T17:50:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-500x6
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # older setups use `import gym` instead

# `load_from_hub` is the download helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="acdg1214/q-Taxi-v3-500x6", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
PrakhAI/DigitGAN
|
PrakhAI
| 2023-07-30T17:50:09Z | 0 | 0 | null |
[
"dataset:mnist",
"arxiv:1704.00028",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-07-30T14:57:40Z |
---
license: cc-by-sa-3.0
datasets:
- mnist
---
[WGAN-GP](https://arxiv.org/abs/1704.00028) model trained on the [MNIST dataset](https://www.tensorflow.org/datasets/catalog/mnist) using [JAX in Colab](https://colab.research.google.com/drive/1RzQfrc4Xf_pvGJD2PaNJyaURLh0nO4Fp?usp=sharing).
| Real Images | Generated Images |
| ------- | -------- |
|  |  |
# Training Progression
<video width="50%" controls>
<source src="https://cdn-uploads.huggingface.co/production/uploads/649f9483d76ca0fe679011c2/nX7L6xkjvAvaca5pHyTp0.mp4" type="video/mp4">
</video>
# Details
This model is based on [WGAN-GP](https://arxiv.org/abs/1704.00028).
The model was trained for ~9h40m on a GCE VM instance (n1-standard-4, 1 x NVIDIA T4).
The Critic consists of 4 Convolutional Layers with strides for downsampling, and Leaky ReLU activation. The critic does not use Batch Normalization or Dropout.
The Generator consists of 4 Transposed Convolutional Layers with ReLU activation and Batch Normalization.
The learning rate was kept constant at 1e-4 for the first 50,000 steps, which was followed by cosine annealing cycles with a peak LR of 1e-3.
The Lambda (gradient penalty coefficient) used was 10 (same as the original paper).
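The warm-up-then-cosine schedule described above can be sketched as a plain function; `cycle_len` below is a hypothetical value for illustration (the actual cycle length is not stated in this card):

```python
import math

def lr_schedule(step, base_lr=1e-4, peak_lr=1e-3,
                constant_steps=50_000, cycle_len=10_000):
    """Constant base_lr, then repeating cosine-annealing cycles from peak_lr.

    cycle_len is an assumed value for illustration; whether each cycle
    anneals to zero or back to base_lr is also a simplification here.
    """
    if step < constant_steps:
        return base_lr
    t = (step - constant_steps) % cycle_len / cycle_len  # position in cycle, [0, 1)
    # Cosine-anneal from peak_lr down to 0 within each cycle.
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * t))

print(lr_schedule(0))        # 0.0001 (warm-up phase)
print(lr_schedule(50_000))   # 0.001  (start of first cosine cycle)
```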
For more details, please refer to the [Colab Notebook](https://colab.research.google.com/drive/1RzQfrc4Xf_pvGJD2PaNJyaURLh0nO4Fp?usp=sharing).
|
acdg1214/q-FrozenLake-v1-4x4-noSlippery
|
acdg1214
| 2023-07-30T17:42:32Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T17:42:29Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook (Unit 2).
model = load_from_hub(repo_id="acdg1214/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
yuanzi1983918/q-Taxi-v3
|
yuanzi1983918
| 2023-07-30T17:34:54Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T17:34:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook (Unit 2).
model = load_from_hub(repo_id="yuanzi1983918/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3_l4_v50
|
KingKazma
| 2023-07-30T17:23:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T17:23:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e7_s6789_v3_l4_v50
|
KingKazma
| 2023-07-30T17:15:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T17:15:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
feic36/xlm-roberta-base-finetuned-panx-de-fr
|
feic36
| 2023-07-30T17:09:48Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-30T16:58:02Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- F1: 0.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2873 | 1.0 | 715 | 0.1802 | 0.8245 |
| 0.1446 | 2.0 | 1430 | 0.1601 | 0.8512 |
| 0.0925 | 3.0 | 2145 | 0.1606 | 0.8620 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3_l4_v50
|
KingKazma
| 2023-07-30T17:07:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T17:07:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e5_s6789_v3_l54_v50
|
KingKazma
| 2023-07-30T16:59:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:59:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3_l54_v50
|
KingKazma
| 2023-07-30T16:51:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:51:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
ctrltokyo/llm_prompt_mask_fill_model
|
ctrltokyo
| 2023-07-30T16:47:26Z | 62 | 1 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"en",
"dataset:sahil2801/code_instructions_120k",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-29T12:13:23Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: ctrltokyo/llm_prompt_mask_fill_model
results: []
datasets:
- sahil2801/code_instructions_120k
metrics:
- accuracy
language:
- en
widget:
- text: "A web application with a REST API on Rails. This will be used for [MASK]."
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ctrltokyo/llm_prompt_mask_fill_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1215
- Validation Loss: 1.5672
- Epoch: 0
## Model description
It's just distilbert-base-uncased with some fine-tuning.
## Intended uses & limitations
This model could be used for live autocompletion of PROMPTS in a coding-specific chatbot. Don't try this on code, because it won't work.
## Training and evaluation data
Evaluated on 5% of training data. No further evaluation performed at this point. Trained on NVIDIA V100.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 108, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1215 | 1.5672 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e3_s6789_v3_l4_v50
|
KingKazma
| 2023-07-30T16:44:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:43:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e2_s6789_v3_l4_v50
|
KingKazma
| 2023-07-30T16:36:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:36:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
feic36/xlm-roberta-base-finetuned-panx-de
|
feic36
| 2023-07-30T16:35:15Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-30T16:25:41Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653353814644136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 |
| 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 |
| 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
kimetsu/Whisper-Small-TF-TIMIT-FLEUR
|
kimetsu
| 2023-07-30T16:33:39Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-29T09:43:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper-Small-TF-TIMIT-FLEUR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Small-TF-TIMIT-FLEUR
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8885
- Wer: 35.0461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4965 | 1.27 | 500 | 0.9304 | 37.3857 |
| 0.1668 | 2.54 | 1000 | 0.8561 | 32.7384 |
| 0.069 | 3.81 | 1500 | 0.8093 | 52.7441 |
| 0.0152 | 5.08 | 2000 | 0.9021 | 54.9437 |
| 0.0083 | 6.35 | 2500 | 0.8471 | 57.3611 |
| 0.0021 | 7.61 | 3000 | 0.8885 | 35.0461 |
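For reference, the Wer column above is the word error rate, reported as a percentage: word-level edit distance divided by the number of reference words. A minimal implementation, assuming plain whitespace tokenization and no text normalization, is:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

print(round(100 * wer("the cat sat", "the cat sat down"), 2))  # 33.33
```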
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
kimetsu/Whisper-Small-TF-TIMIT
|
kimetsu
| 2023-07-30T16:32:47Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-06T16:37:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper-Small-TF-TIMIT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Small-TF-TIMIT
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7104
- Wer: 98.0856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3408 | 3.45 | 500 | 0.3994 | 83.6838 |
| 0.2057 | 6.9 | 1000 | 0.4079 | 92.3470 |
| 0.0616 | 10.34 | 1500 | 0.5076 | 94.2053 |
| 0.023 | 13.79 | 2000 | 0.5998 | 95.3184 |
| 0.0043 | 17.24 | 2500 | 0.6825 | 97.1284 |
| 0.0023 | 20.69 | 3000 | 0.7104 | 98.0856 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
kimetsu/Whisper-Small-TF-TIMIT-FLEUR-Normalizado
|
kimetsu
| 2023-07-30T16:31:42Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-04-04T16:17:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper-Small-TF-TIMIT-FLEUR-Normalizado
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Small-TF-TIMIT-FLEUR-Normalizado
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7395
- Wer: 85.3796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5923 | 1.27 | 500 | 0.9379 | 98.7612 |
| 0.1823 | 2.54 | 1000 | 0.6721 | 89.3262 |
| 0.0852 | 3.81 | 1500 | 0.6534 | 86.1141 |
| 0.0327 | 5.08 | 2000 | 0.6794 | 84.4019 |
| 0.0106 | 6.35 | 2500 | 0.7170 | 82.5587 |
| 0.0064 | 7.61 | 3000 | 0.7395 | 85.3796 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
efainman/rl_course_vizdoom_health_gathering_supreme
|
efainman
| 2023-07-30T16:28:20Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T16:28:15Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.31 +/- 4.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r efainman/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3_l54_v50
|
KingKazma
| 2023-07-30T16:27:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:27:05Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e0_s6789_v3_l4_v50
|
KingKazma
| 2023-07-30T16:20:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:20:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e0_s6789_v3_l54_v50
|
KingKazma
| 2023-07-30T16:19:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:18:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
NasimB/switchboard-log-rarity-seed
|
NasimB
| 2023-07-30T16:16:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T12:54:03Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: switchboard-log-rarity-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switchboard-log-rarity-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3582 | 0.29 | 500 | 5.3483 |
| 5.0337 | 0.58 | 1000 | 4.9322 |
| 4.7073 | 0.87 | 1500 | 4.6918 |
| 4.4439 | 1.17 | 2000 | 4.5506 |
| 4.294 | 1.46 | 2500 | 4.4317 |
| 4.187 | 1.75 | 3000 | 4.3272 |
| 4.0815 | 2.04 | 3500 | 4.2480 |
| 3.8891 | 2.33 | 4000 | 4.2093 |
| 3.8568 | 2.62 | 4500 | 4.1546 |
| 3.8319 | 2.92 | 5000 | 4.0999 |
| 3.6392 | 3.21 | 5500 | 4.0964 |
| 3.5919 | 3.5 | 6000 | 4.0644 |
| 3.5614 | 3.79 | 6500 | 4.0333 |
| 3.4752 | 4.08 | 7000 | 4.0305 |
| 3.3114 | 4.37 | 7500 | 4.0258 |
| 3.3071 | 4.66 | 8000 | 4.0137 |
| 3.2911 | 4.96 | 8500 | 3.9998 |
| 3.1578 | 5.25 | 9000 | 4.0124 |
| 3.1306 | 5.54 | 9500 | 4.0113 |
| 3.1228 | 5.83 | 10000 | 4.0107 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e-1_s6789_v3_l54_v50
|
KingKazma
| 2023-07-30T16:11:04Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T16:11:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
NasimB/all-guten-merged
|
NasimB
| 2023-07-30T16:08:14Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T04:27:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-guten-merged
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-guten-merged
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.343 | 0.29 | 500 | 5.3343 |
| 5.029 | 0.58 | 1000 | 4.9248 |
| 4.6949 | 0.87 | 1500 | 4.6784 |
| 4.4411 | 1.16 | 2000 | 4.5350 |
| 4.2847 | 1.46 | 2500 | 4.4263 |
| 4.1881 | 1.75 | 3000 | 4.3229 |
| 4.0768 | 2.04 | 3500 | 4.2482 |
| 3.8868 | 2.33 | 4000 | 4.2016 |
| 3.854 | 2.62 | 4500 | 4.1449 |
| 3.8184 | 2.91 | 5000 | 4.0992 |
| 3.6422 | 3.2 | 5500 | 4.0917 |
| 3.5736 | 3.49 | 6000 | 4.0606 |
| 3.5562 | 3.78 | 6500 | 4.0323 |
| 3.4752 | 4.07 | 7000 | 4.0253 |
| 3.3047 | 4.37 | 7500 | 4.0219 |
| 3.3036 | 4.66 | 8000 | 4.0090 |
| 3.291 | 4.95 | 8500 | 3.9985 |
| 3.1484 | 5.24 | 9000 | 4.0090 |
| 3.1239 | 5.53 | 9500 | 4.0082 |
| 3.1224 | 5.82 | 10000 | 4.0074 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
atari713/cartpole-v1
|
atari713
| 2023-07-30T15:58:14Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T15:58:05Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mlabonne/llama-2-13b-guanaco
|
mlabonne
| 2023-07-30T15:57:40Z | 133 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:timdettmers/openassistant-guanaco",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T14:13:37Z |
---
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
pipeline_tag: text-generation
---
# Llama-2-13b-guanaco
📝 [Article](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32) |
💻 [Colab](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing) |
📄 [Script](https://gist.github.com/mlabonne/b5718e1b229ce6553564e3f56df72c5c)
<center><img src="https://i.imgur.com/C2x7n2a.png" width="300"></center>
This is a `llama-2-13b-chat-hf` model fine-tuned using QLoRA (4-bit precision) on the [`mlabonne/guanaco-llama2`](https://huggingface.co/datasets/mlabonne/guanaco-llama2) dataset.
## 🔧 Training
It was trained on a Google Colab notebook with a T4 GPU and high RAM.
## 💻 Usage
```python
# pip install transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/llama-2-13b-guanaco"
prompt = "What is a large language model?"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
f'<s>[INST] {prompt} [/INST]',
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
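The `f'<s>[INST] {prompt} [/INST]'` string above is the Llama-2 chat prompt format. A small helper to build it (a simplified, single-turn sketch; the official template also chains previous turns with `</s><s>` between them) might look like:

```python
def build_llama2_prompt(user_message, system_prompt=None):
    """Wrap a user message in the Llama-2 [INST] chat template.

    Simplified sketch: single turn only; an optional system prompt is
    wrapped in <<SYS>> markers as in the reference template.
    """
    if system_prompt:
        inner = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message}"
    else:
        inner = user_message
    return f"<s>[INST] {inner} [/INST]"

print(build_llama2_prompt("What is a large language model?"))
# <s>[INST] What is a large language model? [/INST]
```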
|
o2satz/llama2-qlora-finetunined-nous_medq
|
o2satz
| 2023-07-30T15:49:41Z | 3 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T15:48:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
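For intuition about what the 4-bit quantization configured above does, here is a simplified absmax sketch (not the actual bitsandbytes implementation — NF4 uses a non-uniform codebook rather than uniform integer levels):

```python
def quantize_absmax(values, bits=4):
    """Symmetric absmax quantization: scale so the largest magnitude
    maps to the largest representable integer, then round."""
    qmax = 2 ** (bits - 1) - 1          # 7 for signed 4-bit
    scale = max(abs(v) for v in values) / qmax
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from integer codes."""
    return [x * scale for x in q]

weights = [0.12, -0.7, 0.33, 0.05]
q, scale = quantize_absmax(weights)
restored = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)           # integer codes in [-7, 7]
print(err < 0.06)  # True: reconstruction error bounded by about scale / 2
```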
### Framework versions
- PEFT 0.5.0.dev0
|
EllaHong/gildong_summ_exp1
|
EllaHong
| 2023-07-30T15:18:35Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T15:18:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
DD0101/disfluency_base_augmented_90_90
|
DD0101
| 2023-07-30T14:55:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-19T12:28:35Z |
---
base_model: vinai/phobert-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: disfluency-large-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disfluency-large-3
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0403
- Precision: 0.9904
- Recall: 0.9880
- F1: 0.9892
- Accuracy: 0.9962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 280 | 0.0331 | 0.9719 | 0.9754 | 0.9736 | 0.9926 |
| 0.0853 | 2.0 | 560 | 0.0354 | 0.9771 | 0.9736 | 0.9753 | 0.9923 |
| 0.0853 | 3.0 | 840 | 0.0360 | 0.9759 | 0.9754 | 0.9757 | 0.9928 |
| 0.0119 | 4.0 | 1120 | 0.0255 | 0.9850 | 0.9838 | 0.9844 | 0.9948 |
| 0.0119 | 5.0 | 1400 | 0.0300 | 0.9873 | 0.9850 | 0.9862 | 0.9952 |
| 0.0063 | 6.0 | 1680 | 0.0412 | 0.9848 | 0.9742 | 0.9795 | 0.9927 |
| 0.0063 | 7.0 | 1960 | 0.0304 | 0.9844 | 0.9838 | 0.9841 | 0.9952 |
| 0.0039 | 8.0 | 2240 | 0.0344 | 0.9855 | 0.9820 | 0.9837 | 0.9939 |
| 0.004 | 9.0 | 2520 | 0.0522 | 0.9740 | 0.9681 | 0.9711 | 0.9911 |
| 0.004 | 10.0 | 2800 | 0.0305 | 0.9790 | 0.9790 | 0.9790 | 0.9943 |
| 0.0022 | 11.0 | 3080 | 0.0355 | 0.9837 | 0.9820 | 0.9829 | 0.9945 |
| 0.0022 | 12.0 | 3360 | 0.0400 | 0.9795 | 0.9772 | 0.9783 | 0.9935 |
| 0.002 | 13.0 | 3640 | 0.0394 | 0.9826 | 0.9814 | 0.9820 | 0.9943 |
| 0.002 | 14.0 | 3920 | 0.0452 | 0.9795 | 0.9772 | 0.9783 | 0.9930 |
| 0.0015 | 15.0 | 4200 | 0.0405 | 0.9825 | 0.9808 | 0.9817 | 0.9935 |
| 0.0015 | 16.0 | 4480 | 0.0373 | 0.9832 | 0.9826 | 0.9829 | 0.9941 |
| 0.0013 | 17.0 | 4760 | 0.0361 | 0.9832 | 0.9850 | 0.9841 | 0.9946 |
| 0.0013 | 18.0 | 5040 | 0.0447 | 0.9807 | 0.9790 | 0.9798 | 0.9937 |
| 0.0013 | 19.0 | 5320 | 0.0340 | 0.9874 | 0.9856 | 0.9865 | 0.9955 |
| 0.0009 | 20.0 | 5600 | 0.0374 | 0.9873 | 0.9826 | 0.9849 | 0.9948 |
| 0.0009 | 21.0 | 5880 | 0.0410 | 0.9843 | 0.9784 | 0.9813 | 0.9943 |
| 0.0007 | 22.0 | 6160 | 0.0275 | 0.9892 | 0.9862 | 0.9877 | 0.9961 |
| 0.0007 | 23.0 | 6440 | 0.0360 | 0.9891 | 0.9850 | 0.9871 | 0.9960 |
| 0.0011 | 24.0 | 6720 | 0.0323 | 0.9868 | 0.9850 | 0.9859 | 0.9954 |
| 0.0006 | 25.0 | 7000 | 0.0386 | 0.9867 | 0.9820 | 0.9843 | 0.9949 |
| 0.0006 | 26.0 | 7280 | 0.0408 | 0.9819 | 0.9802 | 0.9811 | 0.9940 |
| 0.0005 | 27.0 | 7560 | 0.0357 | 0.9867 | 0.9826 | 0.9846 | 0.9953 |
| 0.0005 | 28.0 | 7840 | 0.0370 | 0.9843 | 0.9820 | 0.9832 | 0.9946 |
| 0.0004 | 29.0 | 8120 | 0.0313 | 0.9880 | 0.9874 | 0.9877 | 0.9960 |
| 0.0004 | 30.0 | 8400 | 0.0363 | 0.9892 | 0.9862 | 0.9877 | 0.9956 |
| 0.0004 | 31.0 | 8680 | 0.0402 | 0.9843 | 0.9826 | 0.9835 | 0.9946 |
| 0.0004 | 32.0 | 8960 | 0.0321 | 0.9868 | 0.9850 | 0.9859 | 0.9956 |
| 0.0004 | 33.0 | 9240 | 0.0362 | 0.9861 | 0.9838 | 0.9850 | 0.9950 |
| 0.0003 | 34.0 | 9520 | 0.0307 | 0.9886 | 0.9880 | 0.9883 | 0.9964 |
| 0.0003 | 35.0 | 9800 | 0.0350 | 0.9880 | 0.9862 | 0.9871 | 0.9956 |
| 0.0001 | 36.0 | 10080 | 0.0343 | 0.9868 | 0.9856 | 0.9862 | 0.9956 |
| 0.0001 | 37.0 | 10360 | 0.0374 | 0.9874 | 0.9856 | 0.9865 | 0.9952 |
| 0.0003 | 38.0 | 10640 | 0.0333 | 0.9874 | 0.9868 | 0.9871 | 0.9957 |
| 0.0003 | 39.0 | 10920 | 0.0331 | 0.9886 | 0.9862 | 0.9874 | 0.9956 |
| 0.0001 | 40.0 | 11200 | 0.0349 | 0.9880 | 0.9868 | 0.9874 | 0.9961 |
| 0.0001 | 41.0 | 11480 | 0.0407 | 0.9880 | 0.9868 | 0.9874 | 0.9958 |
| 0.0001 | 42.0 | 11760 | 0.0389 | 0.9874 | 0.9868 | 0.9871 | 0.9959 |
| 0.0001 | 43.0 | 12040 | 0.0387 | 0.9892 | 0.9874 | 0.9883 | 0.9961 |
| 0.0001 | 44.0 | 12320 | 0.0414 | 0.9886 | 0.9868 | 0.9877 | 0.9959 |
| 0.0001 | 45.0 | 12600 | 0.0386 | 0.9886 | 0.9868 | 0.9877 | 0.9961 |
| 0.0001 | 46.0 | 12880 | 0.0408 | 0.9892 | 0.9874 | 0.9883 | 0.9961 |
| 0.0 | 47.0 | 13160 | 0.0402 | 0.9898 | 0.9880 | 0.9889 | 0.9962 |
| 0.0 | 48.0 | 13440 | 0.0411 | 0.9886 | 0.9868 | 0.9877 | 0.9959 |
| 0.0 | 49.0 | 13720 | 0.0403 | 0.9904 | 0.9880 | 0.9892 | 0.9962 |
| 0.0 | 50.0 | 14000 | 0.0402 | 0.9904 | 0.9880 | 0.9892 | 0.9962 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
kingbri/airo-llongma-2-13b-16k
|
kingbri
| 2023-07-30T14:55:22Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T00:27:17Z |
---
language:
- en
---
This is a merge of the below models/LoRAs. Merge was done at a 1:1 ratio.
- [LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k)
- [airoboros-l2-gpt-1.4.1-13b-PEFT](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1-peft)
GPTQ quantization is available in a [separate repo](https://huggingface.co/kingbri/airo-llongma-2-13b-16k-GPTQ).
|
ArchitSharma/RLUnit1
|
ArchitSharma
| 2023-07-30T14:33:32Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-28T09:47:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 293.61 +/- 12.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch; the checkpoint filename is an assumption, so verify it against the repo's files.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename; check the repo files.
checkpoint = load_from_hub("ArchitSharma/RLUnit1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Izaaaaa/villager
|
Izaaaaa
| 2023-07-30T13:58:54Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-30T13:57:42Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiang-style/llama2-qlora-sft-chinese-tiger-demo
|
jiang-style
| 2023-07-30T13:30:22Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T13:30:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Quiron/AngrA_AnimFlex_v02
|
Quiron
| 2023-07-30T13:26:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T13:20:14Z |
---
license: creativeml-openrail-m
---
|
theblackcat102/starcoder-1b-evol
|
theblackcat102
| 2023-07-30T13:22:00Z | 134 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"en",
"dataset:theblackcat102/evol-codealpaca-v1",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T12:56:21Z |
---
datasets:
- theblackcat102/evol-codealpaca-v1
model-index:
- name: Starcoder-1b-evol
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 35.37
verified: false
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 22.8
verified: false
language:
- en
---
StarCoder 1B fine-tuned on evol-codealpaca-v1.
Follows the OpenAssistant chat format:
```
<|prompter|>{user_prompt}<|endoftext|><|assistant|>
```
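A minimal helper for assembling that format before tokenization; the function name is illustrative, not part of the model repo:

```python
def build_prompt(user_prompt: str) -> str:
    # OpenAssistant-style format: prompter turn, end-of-text token, assistant turn.
    return f"<|prompter|>{user_prompt}<|endoftext|><|assistant|>"

print(build_prompt("Write a Python function that reverses a string."))
```

The model then continues the text after `<|assistant|>` with its reply.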
|
theblackcat102/redpajama-3b-evol-coder
|
theblackcat102
| 2023-07-30T13:20:59Z | 19 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:theblackcat102/evol-codealpaca-v1",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T06:16:21Z |
---
datasets:
- theblackcat102/evol-codealpaca-v1
model-index:
- name: Redpajama-3b-evol-coder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 20.73
verified: false
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 6.4
verified: false
---
RedPajama 3B fine-tuned on evol-codealpaca-v1.
Follows the OpenAssistant chat format:
```
<|prompter|>{user_prompt}<|endoftext|><|assistant|>
```
|
Sunmin-dev/jungnerd_qa_model
|
Sunmin-dev
| 2023-07-30T12:59:05Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-30T12:50:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: jungnerd_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jungnerd_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
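For intuition, the Adam update implied by the optimizer line above (betas=(0.9, 0.999), epsilon=1e-08, learning rate 2e-05) can be sketched for a single scalar parameter; this is an illustration, not the Trainer's internals:

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad    # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```

On the first step the bias-corrected moments reduce to the raw gradient, so the update magnitude is roughly the learning rate.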
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3885 |
| 2.7128 | 2.0 | 500 | 1.7771 |
| 2.7128 | 3.0 | 750 | 1.6990 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
Technotech/sd-prompt-instruct-3b-epoch-0.4-ggml
|
Technotech
| 2023-07-30T12:54:59Z | 1 | 0 |
transformers
|
[
"transformers",
"llama",
"stable-diffusion",
"instruct",
"magic-prompt",
"natural language inference",
"en",
"dataset:Technotech/sd-prompt-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-30T10:39:26Z |
---
library_name: transformers
license: apache-2.0
datasets:
- Technotech/sd-prompt-instruct
language:
- en
tags:
- stable-diffusion
- instruct
- magic-prompt
- natural language inference
---
# Stable Diffusion Prompt Instruct 3B GGML (OpenLlama v2 3B)
Trained for 0.4 epochs (test) on [Technotech/sd-prompt-instruct](https://huggingface.co/datasets/Technotech/sd-prompt-instruct).
## Prompt Format
```
### Instruction: {prompt}
### Response: {response}
```
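A small helper for building that prompt; it assumes a single newline between the two lines, with the response left for the model to complete:

```python
def build_instruct_prompt(instruction: str) -> str:
    # The model is expected to continue the text after "### Response:".
    return f"### Instruction: {instruction}\n### Response:"

print(build_instruct_prompt("a photo of a sunset over mountains"))
```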
## Formats
At the moment, k-quants are not compatible with OpenLlama v2 3B, which this model is fine-tuned from.
| Quant | Name | Size |
| ----- | ---- | ---- |
| `q4_0` | `sd-prompt-instruct-ggml.q4_0.bin` | 1.93 GB |
| `q4_1` | `sd-prompt-instruct-ggml.q4_1.bin` | 2.14 GB |
| `q5_0` | `sd-prompt-instruct-ggml.q5_0.bin` | 2.36 GB |
| `q5_1` | `sd-prompt-instruct-ggml.q5_1.bin` | 2.57 GB |
|
Qasim30/Reinforce-mycopter
|
Qasim30
| 2023-07-30T12:45:31Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T12:12:17Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-mycopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.50 +/- 10.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jiang-style/llama2-qlora-sft-chinese-chunbing-demo
|
jiang-style
| 2023-07-30T12:42:13Z | 2 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T07:09:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
NasimB/simple_wikipedia-log-rarity-seed
|
NasimB
| 2023-07-30T12:29:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T08:44:43Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: simple_wikipedia-log-rarity-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# simple_wikipedia-log-rarity-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
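For reference, `lr_scheduler_type: cosine` with `lr_scheduler_warmup_steps: 1000` corresponds to a linear warmup followed by cosine decay. A sketch, assuming the usual Hugging Face schedule shape and an illustrative total step count:

```python
import math

def cosine_lr(step, base_lr=5e-4, warmup=1000, total=10000):
    if step < warmup:
        return base_lr * step / warmup  # linear warmup from 0 to base_lr
    progress = (step - warmup) / (total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # decay to 0
```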
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3397 | 0.29 | 500 | 5.3500 |
| 5.0305 | 0.58 | 1000 | 4.9322 |
| 4.7176 | 0.87 | 1500 | 4.7007 |
| 4.4695 | 1.17 | 2000 | 4.5715 |
| 4.3034 | 1.46 | 2500 | 4.4625 |
| 4.2247 | 1.75 | 3000 | 4.3657 |
| 4.1027 | 2.04 | 3500 | 4.3050 |
| 3.9238 | 2.33 | 4000 | 4.2594 |
| 3.8913 | 2.62 | 4500 | 4.2022 |
| 3.8633 | 2.91 | 5000 | 4.1553 |
| 3.6726 | 3.21 | 5500 | 4.1434 |
| 3.6113 | 3.5 | 6000 | 4.1167 |
| 3.6006 | 3.79 | 6500 | 4.0839 |
| 3.5168 | 4.08 | 7000 | 4.0827 |
| 3.3434 | 4.37 | 7500 | 4.0770 |
| 3.3399 | 4.66 | 8000 | 4.0610 |
| 3.3254 | 4.95 | 8500 | 4.0501 |
| 3.1918 | 5.24 | 9000 | 4.0638 |
| 3.1599 | 5.54 | 9500 | 4.0629 |
| 3.1599 | 5.83 | 10000 | 4.0621 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kroonen/llama2-Q4_0-GGML
|
kroonen
| 2023-07-30T12:21:19Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2023-07-22T23:53:01Z |
---
license: mit
---
# Model description
LLAMA-2-Q4_0 GGML (7B and 13B) is a 4-bit quantized version of LLAMA-2, a language model trained by Meta AI. The weights were converted to F32 before being quantized to 4 bits. These alterations make the model more efficient in terms of memory and computational requirements, without significantly compromising its language understanding and generation capabilities.
# Intended uses & limitations
## How to use
This model can be used with llama.cpp (or similar) for a variety of natural language understanding and generation tasks. These include, but are not limited to, text completion, text generation, conversation modeling, and semantic similarity estimation.
## Limitations and bias
While this model is designed to understand and generate human-like text, it has a few limitations:
1. It might generate incorrect or nonsensical responses if the input prompt is ambiguous or lacks sufficient context.
2. It is based on the data it was trained on and therefore might reflect the biases present in those data.
3. Despite the conversion and quantization, this model might still require substantial computational resources for large-scale tasks.
# Training data
The LLAMA-2-Q4_0 GGML (7B and 13B) models were trained on the same data as the original LLAMA-2. For more details, please refer to the LLAMA-2 model card.
# Evaluations
The performance is similar to that of the original LLAMA-2, with a slight drop due to the quantization process. More specific evaluation results will be added as they become available.
|
undrwolf/custom-PPO-Lunarlander
|
undrwolf
| 2023-07-30T12:11:05Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T12:10:59Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -141.82 +/- 84.61
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'undrwolf/custom-PPO-Lunarlander'
'batch_size': 512
'minibatch_size': 128}
```
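As a rough illustration of how `gamma`, `gae_lambda`, and `gae: True` above interact, here is a minimal single-environment GAE computation (no episode terminations); it is a sketch, not the training code:

```python
def compute_gae(rewards, values, next_value, gamma=0.99, lam=0.95):
    # values[t] approximates V(s_t); next_value bootstraps past the last step.
    values = list(values) + [next_value]
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD error
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages
```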
|
leeminhocat/StrayKids
|
leeminhocat
| 2023-07-30T12:08:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T12:08:53Z |
---
license: creativeml-openrail-m
---
|
surianto/nana
|
surianto
| 2023-07-30T12:06:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-30T12:05:23Z |
---
license: creativeml-openrail-m
---
|
AhmedSSoliman/DistilBERT-Marian-Model-on-DJANGO
|
AhmedSSoliman
| 2023-07-30T12:01:43Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"Code Generation",
"Machine translation",
"Text generation",
"translation",
"en",
"dataset:AhmedSSoliman/DJANGO",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-01-11T21:54:43Z |
---
license: mit
datasets:
- AhmedSSoliman/DJANGO
language:
- en
metrics:
- bleu
- accuracy
pipeline_tag: translation
tags:
- Code Generation
- Machine translation
- Text generation
---
|
AhmedSSoliman/MarianCG-DJANGO
|
AhmedSSoliman
| 2023-07-30T11:58:02Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T12:14:00Z |
---
widget:
- text: "define the method i with an argument self."
- text: "substitute asvar for self.asvar."
- text: "convert host to lowercase."
- text: "for every var in self.vars,"
- text: "call the method parser.delete_first_token."
---
[](https://paperswithcode.com/sota/code-generation-on-django?p=mariancg-a-code-generation-transformer-model)
# MarianCG: a code generation transformer model inspired by machine translation
MarianCG is a transformer model for code generation: it generates source code from natural-language descriptions with high accuracy. The work demonstrates that the Marian machine translation model can be operated as a code generation model. On the CoNaLa dataset, MarianCG sets a new state of the art, reaching a BLEU score of 30.92 and an exact match accuracy of 6.2.
The MarianCG model, together with its implementation, training code, and generated outputs, is available at this repository:
https://github.com/AhmedSSoliman/MarianCG-NL-to-Code
The DJANGO dataset is available at:
https://huggingface.co/datasets/AhmedSSoliman/DJANGO
This model is available on the Hugging Face Hub: https://huggingface.co/AhmedSSoliman/MarianCG-DJANGO
```python
# Model and Tokenizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# model_name = "AhmedSSoliman/MarianCG-NL-to-Code"
model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-DJANGO")
tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-DJANGO")
# Input (Natural Language) and Output (Python Code)
NL_input = "define the method i with an argument self."
output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt"))
output_code = tokenizer.decode(output[0], skip_special_tokens=True)
```
This model is available in spaces using gradio at: https://huggingface.co/spaces/AhmedSSoliman/MarianCG-DJANGO
---
Tasks:
- Translation
- Code Generation
- Text2Text Generation
- Text Generation
---
# Citation
We now have a [paper](https://doi.org/10.1186/s44147-022-00159-4) for this work and you can cite:
```
@article{soliman2022mariancg,
title={MarianCG: a code generation transformer model inspired by machine translation},
author={Soliman, Ahmed S and Hadhoud, Mayada M and Shaheen, Samir I},
journal={Journal of Engineering and Applied Science},
volume={69},
number={1},
pages={1--23},
year={2022},
publisher={SpringerOpen},
url={https://doi.org/10.1186/s44147-022-00159-4}
}
```
|
Q-bert/eartquake-model
|
Q-bert
| 2023-07-30T11:51:03Z | 0 | 0 | null |
[
"tabular-classification",
"license:mit",
"region:us"
] |
tabular-classification
| 2023-07-26T18:04:17Z |
---
license: mit
pipeline_tag: tabular-classification
---
## About the Model
This model has been trained on a substantial dataset and utilizes the Gradient Boosting algorithm. The dataset comprises historical earthquake events along with corresponding geographical information. The model is employed to estimate earthquake probabilities in various regions at the specified date and time.
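As a toy illustration of the gradient boosting idea (squared loss, depth-1 stumps on a single feature); the actual model, its features, and the library used are not published here:

```python
def fit_stump(xs, residuals):
    # Find the threshold minimizing squared error of two constant predictions.
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]

def boost(xs, ys, rounds=200, lr=0.1):
    base = sum(ys) / len(ys)                   # start from the mean prediction
    preds = [base] * len(ys)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        t, lm, rm = fit_stump(xs, residuals)   # fit a stump to the residuals
        stumps.append((t, lm, rm))
        preds = [p + lr * (lm if x <= t else rm) for x, p in zip(xs, preds)]
    return base, lr, stumps

def predict(model, x):
    base, lr, stumps = model
    return base + sum(lr * (lm if x <= t else rm) for t, lm, rm in stumps)
```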
## Demo
If you want to try it, you can use the demo here:
[Demo](https://huggingface.co/spaces/Q-bert/EarthQuakeMap)
|
BabaYaga048/ppo-LunarLander-v2
|
BabaYaga048
| 2023-07-30T11:45:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T11:45:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.87 +/- 20.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch; the checkpoint filename is an assumption, so verify it against the repo's files.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename; check the repo files.
checkpoint = load_from_hub("BabaYaga048/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sshalini6/whisper-small-5e4-r8-a32-d0.1
|
sshalini6
| 2023-07-30T11:39:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T11:39:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
nochiantor/CategorAI
|
nochiantor
| 2023-07-30T11:36:11Z | 0 | 1 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-24T20:27:54Z |
---
license: openrail
---
## About the AI
This AI was initially designed for a different project I was going to do, but the scope of that project was too large.
However, this single AI, developed for that project, performed exceptionally well at categorizing.
After a bit of a dataset tweak, it has been repurposed as the base for an assistant.
With just under 50k parameters, this model should run on any hardware.
## List of included files
train.py: the training script for the model; can be used to train a new model on different data.
main.h5: the model trained on the data.csv file for 2 epochs with a batch size of 1; generated by train.py
tokenizer.pkl: contains the tokenizer for the pre-trained model; generated by train.py
interact.py: pulls categorizer.h5 and tokenizer.pkl together into a simple text-based assistant (voice input and output coming soon)
data.csv: the custom dataset the model is trained on
## How to actually use this model
1. download categorizer.h5, tokenizer.pkl, and interact.py
2. run interact.py
3. ask it to do something; current capabilities are opening websites, doing Google searches, and doing math
## Roadmap of features
- voice functionality for interact.py
- finding and reading results from Google
- website finder for the open-site function
- more categories/functions (taking requests)
|
msladic/rl_course_vizdoom_health_gathering_supreme
|
msladic
| 2023-07-30T11:13:02Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T11:10:38Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.53 +/- 5.22
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r msladic/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
emaeon/lora-large-healthcare-model-19_desc
|
emaeon
| 2023-07-30T11:08:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T11:08:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
auhide/chef-gpt-base
|
auhide
| 2023-07-30T11:07:42Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"bg",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-08T14:04:56Z |
---
license: mit
model-index:
- name: chef-gpt-base
results: []
language:
- bg
pipeline_tag: text-generation
widget:
- text: "[ING]1 картоф[REC]"
- text: "[ING]4 бр. яйца[EOL]1 кофичка кисело мляко[EOL]1/4 ч.л. сода[REC]"
---
# chef-gpt-base
GPT-2 architecture trained to generate recipes based on ingredients. [Visit website](https://chef-gpt.streamlit.app/).
## Model description
This is GPT-2 pretrained on a custom dataset of recipes in Bulgarian.
You can find the dataset [here](https://www.kaggle.com/datasets/auhide/bulgarian-recipes-dataset).
## Usage
```python
import re
# Using this library to beautifully print the long recipe string.
from pprint import pprint
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer:
MODEL_ID = "auhide/chef-gpt-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
chef_gpt = AutoModelForCausalLM.from_pretrained(MODEL_ID)
# Prepare the input:
ingredients = [
"1 ч.ч. брашно",
"4 яйца",
"1 кофичка кисело мляко",
"1/4 ч.л. сода",
]
input_text = f"[ING]{'[EOL]'.join(ingredients)}[REC]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
# Generate text:
output = chef_gpt.generate(input_ids, max_length=150)
recipe = tokenizer.batch_decode(output)[0]
# Get the generated recipe - it is up until the 1st [SEP] token.
recipe = re.findall(r"\[REC\](.+?)\[SEP\]", recipe)[0]
print("Съставки/Ingredients:")
pprint(ingredients)
print("\nРецепта/Recipe:")
pprint(recipe)
```
```bash
Съставки/Ingredients:
['1 ч.ч. брашно', '4 яйца', '1 кофичка кисело мляко', '1/4 ч.л. сода']
Рецепта/Recipe:
('В дълбока купа се разбиват яйцата. Добавя се киселото мляко, в което '
'предварително е сложена содата, и се разбива. Добавя се брашното и се омесва '
'тесто. Ако е много гъсто се добавя още малко брашно, ако е много гъсто се '
'добавя още малко брашно. Фурната се загрява предварително на 180С градуса. '
'Когато тестото е готово, се вади от фурната и се разделя на три части.')
```
## Additional tokens
- [ING] - ingredients token; denotes the beginning of the tokens representing the ingredients
- [EOL] - end-of-line token; equivalent to a newline
- [REC] - recipe token; denotes the beginning of the recipe
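The token scheme above can be sketched with plain string handling, mirroring the usage code earlier in this card (the ingredient and recipe strings here are illustrative, not model output):

```python
import re

# Compose a model input from ingredients using the special tokens:
# [ING] opens the ingredient list, [EOL] separates lines, [REC] starts the recipe.
ingredients = ["1 картоф", "1 ч.л. сол"]
input_text = "[ING]" + "[EOL]".join(ingredients) + "[REC]"
print(input_text)  # [ING]1 картоф[EOL]1 ч.л. сол[REC]

# After generation, the recipe spans from [REC] up to the first [SEP] token.
generated = input_text + "Картофът се сварява и се посолява.[SEP]"
recipe = re.findall(r"\[REC\](.+?)\[SEP\]", generated)[0]
print(recipe)  # Картофът се сварява и се посолява.
```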
|
AliGhiasvand86/digit_recognition2
|
AliGhiasvand86
| 2023-07-30T11:05:29Z | 216 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-30T11:05:22Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: digit_recognition2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.19801980257034302
---
# digit_recognition2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### number 1

#### number 2

#### number 3

#### number 4

#### number 5

#### number 6

#### number 7

#### number 8

#### number 9

|
emaeon/lora-large-healthcare-model-14_desc
|
emaeon
| 2023-07-30T11:02:00Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-28T08:09:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
intanm/bri_topic_modeling_baseline_30_001
|
intanm
| 2023-07-30T10:59:06Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-30T10:55:15Z |
---
license: mit
base_model: indobenchmark/indobert-base-p1
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bri_topic_modeling_baseline_30_001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bri_topic_modeling_baseline_30_001
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8029
- Accuracy: 0.7748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 223 | 0.9959 | 0.7284 |
| No log | 2.0 | 446 | 0.8029 | 0.7748 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
yukangcao/cartoon_dreambooth
|
yukangcao
| 2023-07-30T10:58:26Z | 32 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-30T10:43:23Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a model of a cartoon
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - RaikkonenCao/cartoon_dreambooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a model of a cartoon using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
emaeon/lora-large-healthcare-model-10_desc
|
emaeon
| 2023-07-30T10:56:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-28T08:04:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
emaeon/lora-large-healthcare-model-4_desc
|
emaeon
| 2023-07-30T10:49:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T08:24:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
emaeon/lora-large-healthcare-model-2_desc
|
emaeon
| 2023-07-30T10:46:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T07:17:10Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
yukangcao/cat_toy_dreambooth
|
yukangcao
| 2023-07-30T10:42:04Z | 31 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-30T10:28:24Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of cat toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - RaikkonenCao/cat_toy_dreambooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of cat toy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e9_s6789_v3_l5_v100
|
KingKazma
| 2023-07-30T10:36:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T10:30:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3_l5_v100
|
KingKazma
| 2023-07-30T10:27:07Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T10:22:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
yukangcao/dogs1_dreambooth
|
yukangcao
| 2023-07-30T10:26:56Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-30T10:12:51Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - RaikkonenCao/dogs1_dreambooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
Daniil-plotnikov/russian-vision-v5-1
|
Daniil-plotnikov
| 2023-07-30T10:22:58Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"ru",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-29T17:01:58Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
language:
- ru
- en
---
### Russian-Vision-V5.1
This model is simply perfect compared to the others! Example images:
<img src="https://ibb.co/pRNF7jr" alt="." width="1024" height="683">
https://ibb.co/8MwnXJ4
https://ibb.co/W21dfHQ
https://ibb.co/KWcqKjx
https://ibb.co/2dzvg2j
https://ibb.co/yNqhS6x
https://ibb.co/0hCnFBP
https://ibb.co/1sFTZCB
https://ibb.co/hY5KHG6
https://ibb.co/CsVX64L
https://ibb.co/HBr5mZw
https://ibb.co/gFnLbhw
https://ibb.co/CBKfyHZ
https://ibb.co/H4RBJRn
|
TFLai/llama-2-13b-4bit-alpaca-gpt4
|
TFLai
| 2023-07-30T10:21:52Z | 8 | 2 |
peft
|
[
"peft",
"dataset:vicgalle/alpaca-gpt4",
"region:us"
] | null | 2023-07-21T13:37:13Z |
---
library_name: peft
datasets:
- vicgalle/alpaca-gpt4
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
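The quantization settings listed above correspond to a `transformers` `BitsAndBytesConfig`; a minimal sketch (config fragment only, assuming `transformers` and `bitsandbytes` are installed):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the config above: 4-bit NF4 quantization with double
# quantization and bfloat16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

This config object would be passed as `quantization_config` when loading the base model with `from_pretrained`.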
### Framework versions
- PEFT 0.5.0.dev0
|
AdiOO7/Azure-tickets-Classifier-llama-2
|
AdiOO7
| 2023-07-30T10:20:09Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T10:20:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e7_s6789_v3_l5_v100
|
KingKazma
| 2023-07-30T10:18:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T10:14:02Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
yukangcao/dog_dreambooth
|
yukangcao
| 2023-07-30T10:10:39Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-30T09:45:44Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - RaikkonenCao/dog_dreambooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3_l5_v100
|
KingKazma
| 2023-07-30T10:09:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T10:05:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
StupidTree/llama2-qlora-finetunined-french
|
StupidTree
| 2023-07-30T10:04:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T10:04:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3_l5_v100
|
KingKazma
| 2023-07-30T09:51:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-30T09:48:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
atari713/dqn-SpaceInvadersNoFrameskip-v4
|
atari713
| 2023-07-30T09:41:37Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-30T09:41:03Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 517.50 +/- 132.69
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga atari713 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga atari713 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga atari713
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|