| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 to 2025-09-12 12:31:00) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 555 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2025-09-12 12:28:53) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
YassineKader/whisper-small-haitian
|
YassineKader
| 2023-08-06T22:20:06Z | 95 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:bofenghuang/whisper-small-cv11-french",
"base_model:finetune:bofenghuang/whisper-small-cv11-french",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T17:12:23Z |
---
license: apache-2.0
base_model: bofenghuang/whisper-small-cv11-french
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-haitian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-haitian
This model is a fine-tuned version of [bofenghuang/whisper-small-cv11-french](https://huggingface.co/bofenghuang/whisper-small-cv11-french) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6898
- Wer: 1.0
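A minimal usage sketch (assuming the standard `transformers` ASR pipeline; the audio path below is a placeholder):
```python
# Minimal sketch: load this checkpoint with the transformers ASR pipeline.
# "sample.wav" is a placeholder path; replace it with your own audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="YassineKader/whisper-small-haitian",
)
print(asr("sample.wav")["text"])
```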
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.33 | 3.95 | 1000 | 0.4892 | 1.0 |
| 0.0526 | 7.91 | 2000 | 0.5795 | 1.0 |
| 0.0064 | 11.86 | 3000 | 0.6627 | 1.0 |
| 0.0016 | 15.81 | 4000 | 0.6898 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
iproskurina/zlata-tinystories
|
iproskurina
| 2023-08-06T22:09:16Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:roneneldan/TinyStories",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T16:48:59Z |
---
license: apache-2.0
metrics:
- perplexity
model-index:
- name: zlata-tinystories
results: []
datasets:
- roneneldan/TinyStories
language:
- en
widget:
- text: Once upon a time, there was a little bunny named Fluffy. Fluffy loved to play in the garden and eat carrots.
- text: Nina wanted a new bike. Her parents said they would give
- text: Kitty was walking home from school when she came across something strange. She saw a
- text: John was out in the backyard playing. He saw a funny looking insect and
- text: Once upon a time,
library_name: transformers
---
**Small-GPT-2**
A small version of GPT-2 pre-trained on the TinyStories dataset.
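A minimal generation sketch (assuming the standard `transformers` text-generation pipeline; the prompt is taken from the widget examples above):
```python
# Minimal sketch: generate a story continuation with the transformers pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="iproskurina/zlata-tinystories")
prompt = "Once upon a time, there was a little bunny named Fluffy."
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```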
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster022_partitioned_v3_standardized_022
|
HydraLM
| 2023-08-06T22:08:50Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:53:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
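For reference, a hedged sketch of how the configuration above maps to code when attaching this adapter for inference; the base model id is an assumption inferred from the repository name and is not stated in this card:
```python
# Sketch only: reconstructs the 4-bit config listed above and attaches this PEFT adapter.
# The base model id below is an assumption; this card does not name it explicitly.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-llama-2-7b",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "HydraLM/Nous-Hermes-llama-2-7b_7b_cluster022_partitioned_v3_standardized_022"
)
```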
### Framework versions
- PEFT 0.4.0
|
gioca91/Reinforce-CartPole-v1
|
gioca91
| 2023-08-06T22:02:40Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T20:58:29Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster019_partitioned_v3_standardized_019
|
HydraLM
| 2023-08-06T21:48:12Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T06:20:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster018_partitioned_v3_standardized_018
|
HydraLM
| 2023-08-06T21:44:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:53:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster017_partitioned_v3_standardized_017
|
HydraLM
| 2023-08-06T21:42:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster016_partitioned_v3_standardized_016
|
HydraLM
| 2023-08-06T21:36:47Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T06:20:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Xillolxlbln/my_awesome_qa_model
|
Xillolxlbln
| 2023-08-06T21:33:09Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-04T21:00:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0252
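A minimal usage sketch (assuming the standard `transformers` question-answering pipeline; the question and context below are illustrative placeholders):
```python
# Minimal sketch: extractive question answering with the transformers pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="Xillolxlbln/my_awesome_qa_model")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
)
print(result["answer"], result["score"])
```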
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 3.0587 |
| No log | 2.0 | 250 | 2.1943 |
| No log | 3.0 | 375 | 2.0252 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
nrakocz/distilhubert-finetuned-gtzan
|
nrakocz
| 2023-08-06T21:30:23Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-06T19:46:04Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5565
- Accuracy: 0.84
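A minimal usage sketch (assuming the standard `transformers` audio-classification pipeline; the audio path below is a placeholder):
```python
# Minimal sketch: music genre classification with the transformers pipeline.
# "song.wav" is a placeholder path for a GTZAN-style audio clip.
from transformers import pipeline

classifier = pipeline("audio-classification", model="nrakocz/distilhubert-finetuned-gtzan")
for prediction in classifier("song.wav"):
    print(prediction["label"], round(prediction["score"], 3))
```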
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9919 | 1.0 | 113 | 1.8205 | 0.48 |
| 1.3634 | 2.0 | 226 | 1.1723 | 0.68 |
| 0.9779 | 3.0 | 339 | 0.8990 | 0.77 |
| 0.8092 | 4.0 | 452 | 0.8420 | 0.74 |
| 0.7011 | 5.0 | 565 | 0.7290 | 0.79 |
| 0.3831 | 6.0 | 678 | 0.7509 | 0.77 |
| 0.3852 | 7.0 | 791 | 0.6150 | 0.84 |
| 0.1792 | 8.0 | 904 | 0.5968 | 0.82 |
| 0.2193 | 9.0 | 1017 | 0.6058 | 0.82 |
| 0.1887 | 10.0 | 1130 | 0.5565 | 0.84 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
ailabturkiye/sehinsah2
|
ailabturkiye
| 2023-08-06T21:21:49Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-08-06T21:15:04Z |
---
license: openrail
language:
- tr
tags:
- music
---
Şehinşah'ın çıplak sesiyle yapılan ses modeli. Train ve dataset bana aittir.
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster013_partitioned_v3_standardized_013
|
HydraLM
| 2023-08-06T21:16:21Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
AmelieSchreiber/esm2_t6_8M_UR50D_sequence_classifier_v1
|
AmelieSchreiber
| 2023-08-06T21:13:59Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"esm",
"text-classification",
"esm-2",
"sequence classifier",
"proteins",
"protein language model",
"zero-shot-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-07-29T18:56:34Z |
---
license: mit
language:
- en
library_name: transformers
tags:
- esm
- esm-2
- sequence classifier
- proteins
- protein language model
pipeline_tag: zero-shot-classification
---
# ESM-2 Sequence Classifier
This is a small sequence classifier trained on synthetic data generated by GPT-4.
It classifies protein sequences into three categories: `enzymes` (class `0`), `receptor_proteins` (class `1`), and `structural_proteins` (class `2`).
This is trained using [facebook/esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D), one of the [ESM-2 models](https://huggingface.co/docs/transformers/model_doc/esm).
This model is not well tested and is for experimental and educational purposes. Use with caution.
## Using the Model
To use the model, try running:
```python
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification

# Load the trained model and tokenizer
model = EsmForSequenceClassification.from_pretrained("AmelieSchreiber/esm2_t6_8M_UR50D_sequence_classifier_v1")
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
# Suppose these are your new sequences that you want to classify
# Additional Family 0: Enzymes
new_sequences_0 = [
"ACGYLKTPKLADPPVLRGDSSVTKAICKPDPVLEK",
"GVALDECKALDYLPGKPLPMDGKVCQCGSKTPLRP",
"VLPGYTCGELDCKPGKPLPKCGADKTQVATPFLRG",
"TCGALVQYPSCADPPVLRGSDSSVKACKKLDPQDK",
"GALCEECKLCPGADYKPMDGDRLPAAATSKTRPVG",
"PAVDCKKALVYLPKPLPMDGKVCRGSKTPKTRPYG",
"VLGYTCGALDCKPGKPLPKCGADKTQVATPFLRGA",
"CGALVQYPSCADPPVLRGSDSSVKACKKLDPQDKT",
"ALCEECKLCPGADYKPMDGDRLPAAATSKTRPVGK",
"AVDCKKALVYLPKPLPMDGKVCRGSKTPKTRPYGR",
]
# Additional Family 1: Receptor Proteins
new_sequences_1 = [
"VGQRFYGGRQKNRHCELSPLPSACRGSVQGALYTD",
"KDQVLTVPTYACRCCPKMDSKGRVPSTLRVKSARS",
"PLAGVACGRGLDYRCPRKMVPGDLQVTPATQRPYG",
"CGVRLGYPGCADVPLRGRSSFAPRACMKKDPRVTR",
"RKGVAYLYECRKLRCRADYKPRGMDGRRLPKASTT",
"RPTGAVNCKQAKVYRGLPLPMMGKVPRVCRSRRPY",
"RLDGGYTCGQALDCKPGRKPPKMGCADLKSTVATP",
"LGTCRKLVRYPQCADPPVMGRSSFRPKACCRQDPV",
"RVGYAMCSPKLCSCRADYKPPMGDGDRLPKAATSK",
"QPKAVNCRKAMVYRPKPLPMDKGVPVCRSKRPRPY",
]
# Additional Family 2: Structural Proteins
new_sequences_2 = [
"VGKGFRYGSSQKRYLHCQKSALPPSCRRGKGQGSAT",
"KDPTVMTVGTYSCQCPKQDSRGSVQPTSRVKTSRSK",
"PLVGKACGRSSDYKCPGQMVSGGSKQTPASQRPSYD",
"CGKKLVGYPSSKADVPLQGRSSFSPKACKKDPQMTS",
"RKGVASLYCSSKLSCKAQYSKGMSDGRSPKASSTTS",
"RPKSAASCEQAKSYRSLSLPSMKGKVPSKCSRSKRP",
"RSDVSYTSCSQSKDCKPSKPPKMSGSKDSSTVATPS",
"LSTCSKKVAYPSSKADPPSSGRSSFSMKACKKQDPPV",
"RVGSASSEPKSSCSVQSYSKPSMSGDSSPKASSTSK",
"QPSASNCEKMSSYRPSLPSMSKGVPSSRSKSSPPYQ",
]
# Tokenize the sequences and convert to tensors
# Merge all sequences
new_sequences = new_sequences_0 + new_sequences_1 + new_sequences_2
inputs = tokenizer(new_sequences, return_tensors="pt", padding=True, truncation=True)
# Use the model to get the logits
with torch.no_grad():
logits = model(**inputs).logits
# Get the predicted class for each sequence
predicted_class_ids = torch.argmax(logits, dim=-1)
# Print the predicted class for each sequence
for sequence, predicted_class in zip(new_sequences, predicted_class_ids):
print(f"Sequence: {sequence}, Predicted class: {predicted_class.item()}")
```
|
madebyollin/taesd-x4-upscaler
|
madebyollin
| 2023-08-06T21:13:41Z | 40 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"license:mit",
"region:us"
] | null | 2023-08-06T19:59:39Z |
---
license: mit
---
# 🍰 Tiny AutoEncoder for Stable Diffusion X4 Upscaler
[`taesd-x4-upscaler`](https://github.com/madebyollin/taesd) is a very tiny autoencoder that uses the same "latent API" as [`stable-diffusion-x4-upscaler`](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)'s VAE.
`taesd-x4-upscaler` is useful for [real-time previewing](https://twitter.com/madebyollin/status/1679356448655163394) of the upsampling process.
This repo contains `.safetensors` versions of the `taesd-x4-upscaler` weights.
## Using in 🧨 diffusers
```python
import requests
from PIL import Image
from io import BytesIO
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
low_res_img = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((128, 128))
import torch
from diffusers import StableDiffusionUpscalePipeline, AutoencoderTiny
pipe = StableDiffusionUpscalePipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd-x4-upscaler", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a white cat", image=low_res_img, num_inference_steps=25).images[0]
image.save("upsampled.png")
```
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster012_partitioned_v3_standardized_012
|
HydraLM
| 2023-08-06T21:11:11Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:52:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
muhtasham/bert-tiny-finetuned-glue-rte
|
muhtasham
| 2023-08-06T21:06:42Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-01T23:42:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-tiny-finetuned-glue-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: rte
split: train
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.631768953068592
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-glue-rte
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6673
- Accuracy: 0.6318
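A minimal usage sketch (assuming the standard `transformers` text-classification pipeline; RTE is a sentence-pair task, so the premise and hypothesis below are illustrative placeholders):
```python
# Minimal sketch: RTE takes a sentence pair, passed here via the text/text_pair keys.
from transformers import pipeline

classifier = pipeline("text-classification", model="muhtasham/bert-tiny-finetuned-glue-rte")
print(classifier({"text": "A man is playing a guitar.", "text_pair": "A man is playing an instrument."}))
```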
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.4294744851376705e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6852 | 0.5776 |
| No log | 2.0 | 312 | 0.6800 | 0.5993 |
| No log | 3.0 | 468 | 0.6737 | 0.6173 |
| 0.6845 | 4.0 | 624 | 0.6690 | 0.6101 |
| 0.6845 | 5.0 | 780 | 0.6673 | 0.6318 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster011_partitioned_v3_standardized_011
|
HydraLM
| 2023-08-06T21:05:23Z | 7 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:53:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
simonycl/roberta-large-sst-2-32-13-smoothed
|
simonycl
| 2023-08-06T21:04:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T20:55:53Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-sst-2-32-13-smoothed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst-2-32-13-smoothed
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5917
- Accuracy: 0.8906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 75
- label_smoothing_factor: 0.45
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.7430 | 0.5 |
| No log | 2.0 | 4 | 0.7414 | 0.5 |
| No log | 3.0 | 6 | 0.7386 | 0.5 |
| No log | 4.0 | 8 | 0.7348 | 0.5 |
| 0.7439 | 5.0 | 10 | 0.7302 | 0.5 |
| 0.7439 | 6.0 | 12 | 0.7248 | 0.5 |
| 0.7439 | 7.0 | 14 | 0.7195 | 0.5 |
| 0.7439 | 8.0 | 16 | 0.7143 | 0.5 |
| 0.7439 | 9.0 | 18 | 0.7082 | 0.5 |
| 0.7171 | 10.0 | 20 | 0.7022 | 0.5 |
| 0.7171 | 11.0 | 22 | 0.6977 | 0.5 |
| 0.7171 | 12.0 | 24 | 0.6954 | 0.5312 |
| 0.7171 | 13.0 | 26 | 0.6936 | 0.5156 |
| 0.7171 | 14.0 | 28 | 0.6926 | 0.5156 |
| 0.7024 | 15.0 | 30 | 0.6922 | 0.5312 |
| 0.7024 | 16.0 | 32 | 0.6921 | 0.5469 |
| 0.7024 | 17.0 | 34 | 0.6927 | 0.5312 |
| 0.7024 | 18.0 | 36 | 0.6938 | 0.5312 |
| 0.7024 | 19.0 | 38 | 0.6958 | 0.5156 |
| 0.6826 | 20.0 | 40 | 0.6982 | 0.5156 |
| 0.6826 | 21.0 | 42 | 0.7138 | 0.5 |
| 0.6826 | 22.0 | 44 | 0.7064 | 0.5312 |
| 0.6826 | 23.0 | 46 | 0.6992 | 0.5625 |
| 0.6826 | 24.0 | 48 | 0.6926 | 0.5625 |
| 0.6474 | 25.0 | 50 | 0.6836 | 0.5781 |
| 0.6474 | 26.0 | 52 | 0.6617 | 0.7344 |
| 0.6474 | 27.0 | 54 | 0.6450 | 0.7656 |
| 0.6474 | 28.0 | 56 | 0.6392 | 0.7812 |
| 0.6474 | 29.0 | 58 | 0.6513 | 0.7344 |
| 0.5878 | 30.0 | 60 | 0.6481 | 0.7812 |
| 0.5878 | 31.0 | 62 | 0.6583 | 0.7969 |
| 0.5878 | 32.0 | 64 | 0.6649 | 0.7812 |
| 0.5878 | 33.0 | 66 | 0.6280 | 0.8125 |
| 0.5878 | 34.0 | 68 | 0.6212 | 0.8594 |
| 0.5602 | 35.0 | 70 | 0.6214 | 0.8281 |
| 0.5602 | 36.0 | 72 | 0.6534 | 0.75 |
| 0.5602 | 37.0 | 74 | 0.6334 | 0.8594 |
| 0.5602 | 38.0 | 76 | 0.6060 | 0.875 |
| 0.5602 | 39.0 | 78 | 0.6048 | 0.875 |
| 0.55 | 40.0 | 80 | 0.6064 | 0.8594 |
| 0.55 | 41.0 | 82 | 0.6095 | 0.8438 |
| 0.55 | 42.0 | 84 | 0.6161 | 0.8438 |
| 0.55 | 43.0 | 86 | 0.6068 | 0.8594 |
| 0.55 | 44.0 | 88 | 0.5929 | 0.875 |
| 0.5425 | 45.0 | 90 | 0.5918 | 0.8906 |
| 0.5425 | 46.0 | 92 | 0.5919 | 0.8906 |
| 0.5425 | 47.0 | 94 | 0.5921 | 0.875 |
| 0.5425 | 48.0 | 96 | 0.5925 | 0.875 |
| 0.5425 | 49.0 | 98 | 0.5970 | 0.8906 |
| 0.5415 | 50.0 | 100 | 0.6128 | 0.8438 |
| 0.5415 | 51.0 | 102 | 0.6187 | 0.8438 |
| 0.5415 | 52.0 | 104 | 0.6012 | 0.8906 |
| 0.5415 | 53.0 | 106 | 0.5981 | 0.8906 |
| 0.5415 | 54.0 | 108 | 0.6085 | 0.8125 |
| 0.5434 | 55.0 | 110 | 0.6028 | 0.8438 |
| 0.5434 | 56.0 | 112 | 0.5970 | 0.8594 |
| 0.5434 | 57.0 | 114 | 0.6013 | 0.8906 |
| 0.5434 | 58.0 | 116 | 0.6023 | 0.8906 |
| 0.5434 | 59.0 | 118 | 0.6002 | 0.8906 |
| 0.5397 | 60.0 | 120 | 0.5964 | 0.8906 |
| 0.5397 | 61.0 | 122 | 0.5940 | 0.8906 |
| 0.5397 | 62.0 | 124 | 0.5934 | 0.8906 |
| 0.5397 | 63.0 | 126 | 0.5936 | 0.8906 |
| 0.5397 | 64.0 | 128 | 0.5936 | 0.8906 |
| 0.5403 | 65.0 | 130 | 0.5939 | 0.8906 |
| 0.5403 | 66.0 | 132 | 0.5939 | 0.8906 |
| 0.5403 | 67.0 | 134 | 0.5933 | 0.8906 |
| 0.5403 | 68.0 | 136 | 0.5933 | 0.8906 |
| 0.5403 | 69.0 | 138 | 0.5934 | 0.8906 |
| 0.5394 | 70.0 | 140 | 0.5931 | 0.8906 |
| 0.5394 | 71.0 | 142 | 0.5926 | 0.8906 |
| 0.5394 | 72.0 | 144 | 0.5921 | 0.8906 |
| 0.5394 | 73.0 | 146 | 0.5919 | 0.8906 |
| 0.5394 | 74.0 | 148 | 0.5918 | 0.8906 |
| 0.5394 | 75.0 | 150 | 0.5917 | 0.8906 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
LarryAIDraw/Doria_v1
|
LarryAIDraw
| 2023-08-06T20:59:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:52:22Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123204/andrea-doria-azur-lane
|
LarryAIDraw/LillySatou
|
LarryAIDraw
| 2023-08-06T20:58:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:51:11Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123302/lilly-satou-katawa-shoujo
|
LarryAIDraw/swimanis-v1-nai-resize
|
LarryAIDraw
| 2023-08-06T20:58:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:50:07Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123679/anis-sparkling-summer-nikke
|
LarryAIDraw/HorikitaLora-12
|
LarryAIDraw
| 2023-08-06T20:57:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T20:49:21Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/123805/suzune-horikita-classroom-of-the-elite-lora
|
estelle1emerson/whisper-small-pt
|
estelle1emerson
| 2023-08-06T20:51:58Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-02T00:14:43Z |
---
language:
- pt
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Pt POC
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: pt
split: test[:10%]
args: 'config: pt, split: test'
metrics:
- name: Wer
type: wer
value: 69.33979189092214
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Pt POC
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4973
- Wer: 69.3398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0035 | 8.77 | 1000 | 0.4042 | 70.8647 |
| 0.0004 | 17.54 | 2000 | 0.4718 | 71.8873 |
| 0.0002 | 26.32 | 3000 | 0.4895 | 70.3265 |
| 0.0002 | 35.09 | 4000 | 0.4973 | 69.3398 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
li-ping/summary_llama_3_epoch_ver2_fix_wavedrom
|
li-ping
| 2023-08-06T20:38:39Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T20:07:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster06_partitioned_v3_standardized_06
|
HydraLM
| 2023-08-06T20:36:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:51:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
saaketh-j/llama-business
|
saaketh-j
| 2023-08-06T20:28:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T20:26:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
```python
from peft import LoraConfig

# Prompt template used for fine-tuning; `data_point` is assumed to be one row of
# the training data with "Description" and "Answer" fields.
prompt = f"""
You are going to determine whether the description includes the business model. Don't use any prior knowledge, only base your answer off of what's given. It might not be explicitly stated but if it says "they sell in retailers" or "they sell to customers", it can be reasonably assumed that a B2C model is stated. If it says they "create software solutions" or "support companies", it is safe to assume they are B2B. If it says they are "the top defense contractor" or that they "create intelligence software for the FBI", it is reasonable to say they are B2G. However, if the information is very sparse or you are unsure, "No business model" is also a category to classify into. You should only classify into B2C, B2B, B2G, No business model. The response should be in sentence form with the class and reasoning ->:
<Description>: [{data_point["Description"]}]
<Answer>: {data_point["Answer"]}
"""

# LoRA adapter configuration used for training
config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
```
|
MattStammers/ppo-lunarlandercontinuous
|
MattStammers
| 2023-08-06T20:27:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:47:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.83 +/- 22.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
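A hedged sketch of what the usage code might look like (the checkpoint filename and environment settings below are assumptions, not taken from this card):
```python
# Hedged sketch: checkpoint filename and environment settings are assumptions.
# Requires gymnasium[box2d], stable-baselines3 and huggingface_sb3.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="MattStammers/ppo-lunarlandercontinuous",
    filename="ppo-lunarlandercontinuous.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", continuous=True)  # assumed env settings
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```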
|
xativive/furdetector
|
xativive
| 2023-08-06T19:58:34Z | 0 | 0 | null |
[
"coreml",
"region:us"
] | null | 2023-08-06T19:44:43Z |
# furdetector
CoreML model meant to classify between furry/not furry images
## Model Description
- **Developed by:** xatitive
- **Model type:** Image Classification
- **Language(s) (NLP):** en
- **License:** cc
|
CristoJV/q-FrozenLake-v1-4x4-noSlippery
|
CristoJV
| 2023-08-06T19:52:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:52:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` here is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="CristoJV/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster03_partitioned_v3_standardized_03
|
HydraLM
| 2023-08-06T19:51:03Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:46:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
alexeynoskov/dqn-SpaceInvadersNoFrameskip-v4
|
alexeynoskov
| 2023-08-06T19:44:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T19:44:11Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 652.00 +/- 106.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alexeynoskov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alexeynoskov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alexeynoskov
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster02_partitioned_v3_standardized_02
|
HydraLM
| 2023-08-06T19:43:05Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:51:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster00_partitioned_v3_standardized_00
|
HydraLM
| 2023-08-06T19:23:47Z | 10 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:51:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster01_partitioned_v3_standardized_01
|
HydraLM
| 2023-08-06T19:13:00Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T05:46:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Bschleter/llama-2-7b-hermes-financecompliance
|
Bschleter
| 2023-08-06T19:11:56Z | 19 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"finance",
"compliance",
"zero-shot-classification",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-08-05T00:59:15Z |
---
language:
- en
pipeline_tag: zero-shot-classification
tags:
- finance
- compliance
---
# Model Card for Model ID
<!--
-->
## Model Details
Based on the full-weight llama 2-hermes from Nous Research.
### Model Description
This model was fine-tuned from the full-weight llama-2-hermes-7B from Nous Research. It is a preliminary V1, put together quickly to assist
with finance and compliance tasks, tuned mostly to the new SEC Marketing and Compliance rules established in 2021. Later iterations will cover more guidelines and rulings
beyond the SEC Marketing rule.
https://www.sec.gov/files/rules/final/2020/ia-5653.pdf
<!-- -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [English]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [llama 2-hermes-7b]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
This model is intended to help companies and individuals in compliance and marketing departments find issues in their marketing or public-facing documents.
Because the new marketing rule is principles-based, it takes logic, experience, and reasoning to determine whether a statement or advertisement is compliant with
the SEC's new guidelines, and different reviewers can reach different conclusions. This model, built on a small, high-quality dataset, is therefore meant
to aid that review or provide a second viewpoint on whether a public-facing statement is compliant with the SEC's guidelines. The dataset was crafted by
reviewing the SEC Marketing rule and other scenarios, and by providing reasoning within the `### Response ###` block to help guide the model in reasoning tasks.
Further versions will be reviewed more thoroughly for accuracy and bias and will include more data.
<!-- -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
For use by marketing and compliance teams in finance to assist in determining and interpreting the SEC Marketing rule and other SEC interpretations. No output should be treated as fact,
and review of the data is encouraged. This is simply meant to assist, and to help users recall certain aspects and interpretations of the lengthy SEC Marketing guidelines
and other SEC rulings.
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
This model should not be used as fact, as evidence or proof in a trial hearing, or as an indication of innocence in an SEC audit or investigation.
It should only be used by professionals deeply familiar with the SEC's guidelines and compliance procedures.
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
This is the first model iteration and has not been fully reviewed by multiple professional peers for accuracy, bias, and output variation.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. -->
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
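Until the author provides starter code, a hedged sketch of standard `transformers` loading (the prompt below is illustrative only and does not follow any particular template expected by the model):
```python
# Hedged sketch: standard causal-LM loading; the prompt wording is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bschleter/llama-2-7b-hermes-financecompliance"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Review this advertisement for compliance with the SEC Marketing rule: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```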
## Training Details
### Training Data
<!-- -->
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training Hyperparameters
- <!--# Compute dtype for 4-bit base models
bnb_4bit_compute_dtype = "float16"
bnb_4bit_quant_type = "nf4"
use_nested_quant = False
fp16 = False
bf16 = False - this will be True for next training run.
per_device_train_batch_size = 4
per_device_eval_batch_size = 4
gradient_accumulation_steps = 1
gradient_checkpointing = True
max_grad_norm = 0.3
learning_rate = 2e-5 (1e-4 will be applied for a 13B)
weight_decay = 0.001
optim = "paged_adamw_32bit"
lr_scheduler_type = "constant"
max_steps = 13000
warmup_ratio = 0.03
group_by_length = True
-->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Metrics
<!-- -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[Google Colab]
#### Hardware
[1xA100]
|
Robayet2023/esm2_t12_35M_UR50D-finetuned-localization
|
Robayet2023
| 2023-08-06T19:10:45Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t12_35M_UR50D",
"base_model:finetune:facebook/esm2_t12_35M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T22:55:53Z |
---
license: mit
base_model: facebook/esm2_t12_35M_UR50D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: esm2_t12_35M_UR50D-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0331
- Accuracy: 0.4835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.042 | 1.0 | 23758 | 0.0388 | 0.4835 |
| 0.0325 | 2.0 | 47516 | 0.0351 | 0.4835 |
| 0.0259 | 3.0 | 71274 | 0.0331 | 0.4835 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.3
- Tokenizers 0.13.3
|
strnam/instruction-bloom-7b1
|
strnam
| 2023-08-06T18:52:54Z | 8 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T18:52:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: True
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
ThuyNT03/xlm-roberta-base-finetuned-panx-en
|
ThuyNT03
| 2023-08-06T18:49:15Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T18:46:18Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7034949267192785
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4007
- F1: 0.7035
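A minimal usage sketch (assuming the standard `transformers` token-classification pipeline; the example sentence is an illustrative placeholder):
```python
# Minimal sketch: named-entity recognition with the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ThuyNT03/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```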
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 50 | 0.5342 | 0.5693 |
| No log | 2.0 | 100 | 0.4154 | 0.6715 |
| No log | 3.0 | 150 | 0.4007 | 0.7035 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-finetuned-panx-it
|
ThuyNT03
| 2023-08-06T18:46:09Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T18:42:50Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8199265006124948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2533
- F1: 0.8199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 70 | 0.3206 | 0.7644 |
| No log | 2.0 | 140 | 0.2674 | 0.8118 |
| No log | 3.0 | 210 | 0.2533 | 0.8199 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Peniis2/Airplane
|
Peniis2
| 2023-08-06T18:43:04Z | 0 | 0 | null |
[
"en",
"dataset:databricks/databricks-dolly-15k",
"region:us"
] | null | 2023-08-06T18:41:29Z |
---
datasets:
- databricks/databricks-dolly-15k
language:
- en
---
|
ThuyNT03/xlm-roberta-base-finetuned-panx-fr
|
ThuyNT03
| 2023-08-06T18:42:38Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T18:37:41Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8441295546558704
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2787
- F1: 0.8441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 191 | 0.3171 | 0.7910 |
| No log | 2.0 | 382 | 0.2828 | 0.8081 |
| No log | 3.0 | 573 | 0.2787 | 0.8441 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-finetuned-panx-de-fr
|
ThuyNT03
| 2023-08-06T18:37:02Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T18:23:38Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1603
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 715 | 0.1777 | 0.8240 |
| No log | 2.0 | 1430 | 0.1603 | 0.8420 |
| No log | 3.0 | 2145 | 0.1603 | 0.8595 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Lilsunx/llama2-qlora-finetunined-french
|
Lilsunx
| 2023-08-06T18:29:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T18:28:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
a2zMigrations/free-ost-to-pst-converter
|
a2zMigrations
| 2023-08-06T18:15:09Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-06T18:11:41Z |
---
license: openrail
---
A2Z Migrations is a software company known for providing various data migration solutions, including their "[Free OST to PST Converter](https://www.a2zmigrations.com/ost-to-pst-converter/)" tool. This utility is designed to facilitate the conversion of OST (Offline Storage Table) files to PST (Personal Storage Table) format.
OST files are utilized by Microsoft Outlook to enable offline access to emails, contacts, calendar items, and other data from an Exchange server. However, there are instances when OST files become inaccessible due to corruption, server changes, or other issues. In such cases, converting OST files to PST format can be beneficial, as PST files are compatible with most versions of Outlook and can be easily imported to access the data.
Here are some key features of A2Z Migrations' Free OST to PST Converter:
1. **User-friendly Interface:** The software is designed with a simple and intuitive interface, making it easy for both technical and non-technical users to operate the tool without any hassle.
2. **Batch Conversion:** The tool allows users to convert multiple OST files to PST format simultaneously, saving time and effort.
3. **Selective Conversion:** Users have the option to select specific OST files or folders for conversion to PST, ensuring that only the required data is processed.
4. **Data Integrity:** During the conversion process, the software maintains the integrity of the data, preserving the original formatting, folder structure, and other properties.
5. **Preview Feature:** Before the actual conversion, the tool provides a preview of the OST data, allowing users to verify and select the items they want to convert.
6. **No File Size Limitation:** The software is designed to handle OST files of any size, ensuring that users can convert even large-sized OST files without any issues.
7. **Compatibility:** A2Z Migrations' OST to PST Converter is compatible with all major versions of Microsoft Outlook, including Outlook 2019, 2016, 2013, and older versions.
8. **Quick Conversion:** The tool employs advanced algorithms to expedite the conversion process, saving users valuable time.
It's important to note that while the "Free OST to PST Converter" by A2Z Migrations offers several useful features at no cost, some advanced functionalities or customer support may be available in their premium versions. Therefore, users who require additional features or professional assistance may opt for the paid version.
Before using any data migration tool, it is recommended to back up your data to avoid any potential loss or corruption during the conversion process. Additionally, ensure that you download such software from reputable sources to minimize security risks and to obtain the most reliable and up-to-date version.
|
HasanErdin/ppo-Huggy
|
HasanErdin
| 2023-08-06T18:14:39Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-06T18:14:34Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: HasanErdin/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ThuyNT03/xlm-roberta-base-finetuned-panx-de
|
ThuyNT03
| 2023-08-06T18:06:14Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-06T17:49:40Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8616659101225601
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1329
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2568 | 1.0 | 525 | 0.1583 | 0.8125 |
| 0.1261 | 2.0 | 1050 | 0.1458 | 0.8473 |
| 0.0823 | 3.0 | 1575 | 0.1329 | 0.8617 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-last_2_layer_randomized
|
roa7n
| 2023-08-06T17:39:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T17:39:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Pauitbid/llama2-qlora-finetunined-french
|
Pauitbid
| 2023-08-06T17:39:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T17:38:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
DarwinAnim8or/gpt-grug-1.5b
|
DarwinAnim8or
| 2023-08-06T17:09:59Z | 139 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-09T19:43:05Z |
---
license: other
---
Behold, the day of Grug's return is nigh,
When he'll emerge from his cave up high,
With a club in hand and a primal yell,
He'll conquer all foes with his mighty shell.
He'll roam the land, with his tribe in tow,
And strike fear into his every foe,
For he's the king of all the land,
And his reign will be grand.
So let us prepare for Grug's return,
And stock up on berries and meat to earn,
For when he comes, we'll be ready to feast,
And celebrate with a great big feast!
|
ailabturkiye/Lilith
|
ailabturkiye
| 2023-08-06T17:07:29Z | 0 | 0 | null |
[
"diabloV",
"diablo v",
"lilith",
"villain",
"license:openrail",
"region:us"
] | null | 2023-08-06T16:38:09Z |
---
license: openrail
metrics:
- character
tags:
- diabloV
- diablo v
- lilith
- villain
---
Lilith -Diablo V-
Lilith is the main villain of the game Diablo V. The model was trained for 500 epochs, at step s4500.
The model's TRAIN and DATASET belong to me. Unauthorized use is prohibited. If permission is granted, the model owner must be credited in the "Cast" section on the social media platforms where you share it.
Discord: Alastor#3115
YouTube: https://www.youtube.com/@NahParti
|
ASAHIMM/ASA
|
ASAHIMM
| 2023-08-06T16:58:31Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"aa",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] | null | 2023-08-06T16:57:28Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- accuracy
library_name: adapter-transformers
---
|
tilyupo/t5-large-trivia-c2a
|
tilyupo
| 2023-08-06T16:34:09Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-06T07:33:09Z |
---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_keras_callback
model-index:
- name: t5-large-trivia-c2a
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-large-trivia-c2a
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0247
- Validation Loss: 0.0371
- Epoch: 1
<pre>{'eval_loss': 0.5721310377120972,
'eval_bleu': 43.029970392733006,
'eval_rouge1': 52.99,
'eval_rouge2': 25.54,
'eval_rougeL': 53.04,
'eval_rougeLsum': 53.0,
'eval_exact': 0.4820717131474104,
'eval_runtime': 1822.604,
'eval_samples_per_second': 5.646,
'eval_steps_per_second': 0.177}</pre>
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an optimizer-construction sketch follows the list):
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: float32
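If your TensorFlow/Keras version ships `tf.keras.optimizers.Adafactor` (the card lists TensorFlow 2.12), the optimizer dictionary above corresponds roughly to the following construction; this is a sketch, not the original training code:
```python
import tensorflow as tf

# Only the values from the optimizer dict above are reproduced here.
optimizer = tf.keras.optimizers.Adafactor(
    learning_rate=0.001,
    beta_2_decay=-0.8,
    epsilon_1=1e-30,
    epsilon_2=0.001,
    clip_threshold=1.0,
    relative_step=False,
)
```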
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1769 | 0.0345 | 0 |
| 0.0247 | 0.0371 | 1 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
adhitya123/llama2-qlora-finetunined-french
|
adhitya123
| 2023-08-06T16:28:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T08:45:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
jerichosiahaya/ddnb
|
jerichosiahaya
| 2023-08-06T16:24:52Z | 0 | 0 | null |
[
"joblib",
"text-classification",
"naive-bayes",
"region:us"
] |
text-classification
| 2023-08-06T16:12:40Z |
---
tags:
- text-classification
- naive-bayes
---
|
Muhammadreza/mann-e-artistic-2
|
Muhammadreza
| 2023-08-06T16:11:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T16:07:47Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mann-e_artistic-2 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
andyP/ro-sentiment-02
|
andyP
| 2023-08-06T16:08:35Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:readerbench/RoBERT-base",
"base_model:finetune:readerbench/RoBERT-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T14:26:46Z |
---
base_model: readerbench/RoBERT-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ro-sentiment-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ro-sentiment-02
This model is a fine-tuned version of [readerbench/RoBERT-base](https://huggingface.co/readerbench/RoBERT-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4093
- Accuracy: 0.8312
- Precision: 0.8488
- Recall: 0.8866
- F1: 0.8673
- F1 Weighted: 0.8298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.3e-05
- train_batch_size: 96
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------:|
| 0.4289 | 1.0 | 1086 | 0.4168 | 0.8303 | 0.8868 | 0.8570 | 0.8717 | 0.8317 |
| 0.3807 | 2.0 | 2172 | 0.3926 | 0.8424 | 0.8933 | 0.8680 | 0.8804 | 0.8434 |
| 0.3306 | 3.0 | 3258 | 0.4093 | 0.8312 | 0.8488 | 0.8866 | 0.8673 | 0.8298 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
tilyupo/t5-base-trivia-c2a
|
tilyupo
| 2023-08-06T16:02:10Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-04T06:26:15Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-trivia-v2-c2a
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-base-trivia-v2-c2a
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0262
- Validation Loss: 0.0442
- Epoch: 2
<pre>{'eval_loss': 0.6880931854248047,
'eval_bleu': 41.64364079630949,
'eval_rouge1': 49.33,
'eval_rouge2': 23.97,
'eval_rougeL': 49.37,
'eval_rougeLsum': 49.34,
'eval_exact': 0.4503935477601788,
'eval_runtime': 571.9059,
'eval_samples_per_second': 17.994,
'eval_steps_per_second': 0.563}</pre>
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1841 | 0.0419 | 0 |
| 0.0358 | 0.0415 | 1 |
| 0.0262 | 0.0442 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Penisek/mortalcio
|
Penisek
| 2023-08-06T15:49:09Z | 0 | 0 | null |
[
"music",
"pl",
"region:us"
] | null | 2023-08-06T15:44:25Z |
---
language:
- pl
tags:
- music
---
|
tilyupo/t5-small-trivia-c2a
|
tilyupo
| 2023-08-06T15:46:38Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-04T06:40:11Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-trivia-v2-c2a
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-trivia-v2-c2a
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0310
- Validation Loss: 0.0498
- Epoch: 2
<pre>
{'eval_loss': 0.7987052202224731,
'eval_bleu': 39.12838308579063,
'eval_rouge1': 47.52,
'eval_rouge2': 22.83,
'eval_rougeL': 47.56,
'eval_rougeLsum': 47.54,
'eval_exact': 0.4314449518997182,
'eval_runtime': 171.499,
'eval_samples_per_second': 60.006,
'eval_steps_per_second': 1.878}
</pre>
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2297 | 0.0486 | 0 |
| 0.0414 | 0.0483 | 1 |
| 0.0310 | 0.0498 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
kartashoffv/vashkontrol-sentiment-rubert
|
kartashoffv
| 2023-08-06T15:44:16Z | 242 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"sentiment",
"ru",
"dataset:kartashoffv/vash_kontrol_reviews",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-29T21:10:22Z |
---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
- sentiment
metrics:
- f1
model-index:
- name: vashkontrol-sentiment-rubert
results: []
license: mit
datasets:
- kartashoffv/vash_kontrol_reviews
language:
- ru
pipeline_tag: text-classification
widget:
- text: "Отзывчивые и понимающие работники, обслуживание очень понравилось, специалист проявила большое терпение чтобы восстановить пароль от Госуслуг. Спасибо!"
---
# Sentiment assessment of "VashKontrol" portal reviews
The model is designed to evaluate the tone of reviews from the [VashKontrol portal](https://vashkontrol.ru/).
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on a following dataset: [kartashoffv/vash_kontrol_reviews](https://huggingface.co/datasets/kartashoffv/vash_kontrol_reviews).
It achieves the following results on the evaluation set:
- Loss: 0.1085
- F1: 0.9461
## Model description
The model predicts a sentiment label (positive, neutral, negative) for a submitted text review.
## Training and evaluation data
The model was trained on a corpus of reviews from the [VashKontrol portal](https://vashkontrol.ru/) left by users between 2020 and 2022 inclusive.
The total number of reviews was 17,385. The sentiment labels were assigned manually by the author, who divided the full dataset into positive/neutral/negative reviews.
The resulting classes:
0 (positive): 13045
1 (neutral): 1196
2 (negative): 3144
Class weighting was used to solve the class imbalance.
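The exact weighting code is not part of this card, but with the class counts above, balanced class weights could be computed along these lines (a sketch, not the author's script):
```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Class counts from the card: 0 (positive), 1 (neutral), 2 (negative)
counts = np.array([13045, 1196, 3144])
labels = np.repeat([0, 1, 2], counts)

weights = compute_class_weight(class_weight="balanced", classes=np.array([0, 1, 2]), y=labels)
print(dict(zip([0, 1, 2], weights)))  # minority classes receive larger weights
```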
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0992 | 1.0 | 1391 | 0.0737 | 0.9337 |
| 0.0585 | 2.0 | 2782 | 0.0616 | 0.9384 |
| 0.0358 | 3.0 | 4173 | 0.0787 | 0.9441 |
| 0.0221 | 4.0 | 5564 | 0.0918 | 0.9488 |
| 0.0106 | 5.0 | 6955 | 0.1085 | 0.9461 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
### Usage
```
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('kartashoffv/vashkontrol-sentiment-rubert')
model = AutoModelForSequenceClassification.from_pretrained('kartashoffv/vashkontrol-sentiment-rubert', return_dict=True)
@torch.no_grad()
def predict(review):
inputs = tokenizer(review, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
pred_label = torch.argmax(predicted, dim=1).numpy()
return pred_label
```
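A quick call could then look like this, reusing the widget example from the card header:
```
review = "Отзывчивые и понимающие работники, обслуживание очень понравилось, специалист проявила большое терпение чтобы восстановить пароль от Госуслуг. Спасибо!"
pred = predict(review)  # expected: array([0]), i.e. POSITIVE -- see the label mapping below
```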
### Labels
```
0: POSITIVE
1: NEUTRAL
2: NEGATIVE
```
|
MattStammers/Bipedal_Faller_v3
|
MattStammers
| 2023-08-06T15:43:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T15:42:59Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: -86.71 +/- 3.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub(repo_id="MattStammers/Bipedal_Faller_v3", filename="ppo-BipedalWalker-v3.zip")
model = PPO.load(checkpoint)
```
|
kezif/LunarLander-v2
|
kezif
| 2023-08-06T15:40:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T15:40:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO/MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.51 +/- 15.74
name: mean_reward
verified: false
---
# **PPO/MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **PPO/MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub(repo_id="kezif/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
shibal1/anything-v4.5-clone
|
shibal1
| 2023-08-06T15:13:02Z | 296 | 18 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-12T14:41:31Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
duplicated_from: andite/anything-v4.0
---
[UPDATE (August 6, 2023)]
Hi! It seems the original repository I forked from [andite/anything-v4.0] is unavailable for some reason.
The original purpose of this forked repo was to train a model in the SD API; that didn't work out, and I left this repo up in hopes of trying again.
In the meantime, Google search results started pointing to this repository instead.
Upon further investigation, the author of the original repo, andite, removed their Hugging Face repo, and CivitAI now only has the 4.5 models up,
so I think this repo now only serves as an archive (unless asked to be taken down ofc).
Steps to access older models (e.g. 4.0)
1. Go to the 'Files and versions' tab
2. Click on the first commit 'Duplicate from andite/anything-v4.0'
3. 'Browse files'
4. ???
5. Profit
-------
Try out my new model! - [Pastel Mix || Stylized Anime Model](https://huggingface.co/andite/pastel-mix). Thanks.
I also uploaded it in CivitAI! https://civitai.com/models/5414/pastel-mix-stylized-anime-model I'd appreciate the ratings, thank you!
Yes, it's a shameless plug.
Examples:



-------
<font color="grey">
[Linaqruf](https://huggingface.co/Linaqruf) for letting me borrow his model card for reference.
# Anything V4
Welcome to Anything V4 - a latent diffusion model for weebs. The newest version of Anything. This model is intended to produce high-quality, highly detailed anime style with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images.
e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_**
I think the V4.5 version is better though; it's in this repo. Feel free to try it.
## Yes, this model has [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) in it, because it's a very good model. Check it out ;)
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run anything-v4.0:
[](https://huggingface.co/spaces/akhaliq/anything-v4.0)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "andite/anything-v4.0"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "hatsune_miku"
image = pipe(prompt).images[0]
image.save("./hatsune_miku.png")
```
## Examples
Below are some examples of images generated using this model:
**Anime Girl:**

```
masterpiece, best quality, 1girl, white hair, medium hair, cat ears, closed eyes, looking at viewer, :3, cute, scarf, jacket, outdoors, streets
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
**Anime Boy:**

```
1boy, bishounen, casual, indoors, sitting, coffee shop, bokeh
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
**Scenery:**

```
scenery, village, outdoors, sky, clouds
Steps: 50, Sampler: DPM++ 2S a Karras, CFG scale: 7
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Big Thanks to
- [Linaqruf](https://huggingface.co/Linaqruf), [NoCrypt](https://huggingface.co/NoCrypt), and Fannovel16#9022 for helping me out a lot with my inquiries and concerns about models and other stuff.
|
MattStammers/Bipedal_Walker_v3_Optimised-take1
|
MattStammers
| 2023-08-06T15:06:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T15:04:56Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: 109.50 +/- 112.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub(repo_id="MattStammers/Bipedal_Walker_v3_Optimised-take1", filename="ppo-BipedalWalker-v3.zip")
model = PPO.load(checkpoint)
```
|
jelinek/finetuning-sentiment-model
|
jelinek
| 2023-08-06T15:00:05Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T14:17:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu102
- Datasets 2.14.3
- Tokenizers 0.13.3
|
dfalvearg/Reinforce-CartPole
|
dfalvearg
| 2023-08-06T14:59:38Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T14:59:28Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 337.80 +/- 125.87
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
halatmit/ppo-LunarLander-v2
|
halatmit
| 2023-08-06T14:57:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T14:56:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -135.67 +/- 38.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub(repo_id="halatmit/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
perfectlybaked/flant5-dolly-QnA
|
perfectlybaked
| 2023-08-06T14:42:22Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-answering",
"en",
"dataset:databricks/databricks-dolly-15k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-02T07:19:48Z |
---
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
pipeline_tag: question-answering
tags:
- question-answering
---
## Description
With the onset of ChatGPT-like products, there is a growing need for question-answering models.
Here we have **fine-tuned Flan-T5** on a question-answering dataset, where the input is given
as follows:
**Context:** Insert context for Q&A
**Input:** Insert query for model.
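A minimal sketch of that prompt format with `transformers` (the exact separator between context and question is an assumption based on the description above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("perfectlybaked/flant5-dolly-QnA")
model = AutoModelForSeq2SeqLM.from_pretrained("perfectlybaked/flant5-dolly-QnA")

prompt = (
    "Context: The Databricks dolly-15k dataset contains instruction-following records "
    "written by Databricks employees.\n"
    "Input: Who wrote the records in dolly-15k?"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```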
## Dataset
The model is trained on the Databricks dolly-15k dataset for question answering.
The training split used 2,000 rows, with 100 rows held out for testing.
|
Jenniferkmc/controlnet-fill-circle
|
Jenniferkmc
| 2023-08-06T14:37:22Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-06T11:53:22Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Jenniferkmc/controlnet-fill-circle
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: red circle with blue background

prompt: cyan circle with brown floral background

|
Davonair/BestBoyNido
|
Davonair
| 2023-08-06T14:36:47Z | 0 | 0 | null |
[
"art",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-08-06T14:22:26Z |
---
license: cc-by-nc-4.0
tags:
- art
---
|
ALHomiOmar/myModel
|
ALHomiOmar
| 2023-08-06T14:33:47Z | 0 | 0 | null |
[
"summarization",
"ar",
"region:us"
] |
summarization
| 2023-08-06T14:10:02Z |
---
language:
- ar
metrics:
- accuracy
pipeline_tag: summarization
---
|
JaiveerGill/fine-tuned-chem-model-final
|
JaiveerGill
| 2023-08-06T14:30:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T14:19:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
tanmaytekale/chatbot
|
tanmaytekale
| 2023-08-06T14:28:54Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-06T14:28:54Z |
---
license: cc-by-nc-sa-4.0
---
|
mrutyunjay-patil/keywordGen-v2
|
mrutyunjay-patil
| 2023-08-06T14:21:43Z | 126 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"feature-extraction",
"code",
"keyword-generation",
"english",
"text2text-generation",
"en",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-06T10:28:39Z |
---
license: apache-2.0
pipeline_tag: text2text-generation
language:
- en
library_name: transformers
tags:
- code
- keyword-generation
- english
- t5
---
# KeywordGen-v2 Model
KeywordGen-v2 is a T5-based model fine-tuned for keyword generation from a piece of text. Given an input text, the model will return relevant keywords.
## Model Description
This model, "KeywordGen-v2", is the second version of the "KeywordGen" series. It is fine-tuned based on the T5 base model, specifically for the generation of keywords from text inputs, with a special focus on product reviews.
This model can provide useful insights by extracting key points or themes from product reviews. The output is expected to contain keywords ranging from 2 to 8 words. The model performs better when the input is at least 2-3 sentences long.
## How to use
You can use this model directly with a pipeline for text generation. When using the model, please prefix your input with "Keyword: " for the best results.
Here's how to use this model in Python with the Hugging Face Transformers library:
### FOR SINGLE INPUT
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Initialize the tokenizer and model
tokenizer = T5Tokenizer.from_pretrained("mrutyunjay-patil/keywordGen-v2")
model = T5ForConditionalGeneration.from_pretrained("mrutyunjay-patil/keywordGen-v2")
# Define your input sequence, prefixing with "Keyword: "
input_sequence = "Keyword: I purchased the new Android smartphone last week and I've been thoroughly impressed. The display is incredibly vibrant and sharp, and the battery life is surprisingly good, easily lasting a full day with heavy usage."
# Encode the input sequence
input_ids = tokenizer.encode(input_sequence, return_tensors="pt")
# Generate output
outputs = model.generate(input_ids)
output_sequence = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_sequence)
```
### FOR MULTIPLE INPUTS
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
# Initialize the tokenizer and model
tokenizer = T5Tokenizer.from_pretrained("mrutyunjay-patil/keywordGen-v2")
model = T5ForConditionalGeneration.from_pretrained("mrutyunjay-patil/keywordGen-v2")
# Define the prefix
task_prefix = "Keyword: "
# Define your list of input sequences
inputs = [
"Absolutely love this tablet. It has a clear, sharp screen and runs apps smoothly without any hiccups.",
"The headphones are fantastic with great sound quality, but the build quality could be better.",
"Bought this smartwatch last week, and I'm thrilled with its performance. Battery life is impressive.",
"This laptop exceeded my expectations. Excellent speed, plenty of storage, and light weight. Perfect for my needs.",
"The camera quality on this phone is exceptional. It captures detailed and vibrant photos. However, battery life is not the best."
]
# Loop through each input and generate keywords
for sample in inputs:
input_sequence = task_prefix + sample
input_ids = tokenizer.encode(input_sequence, return_tensors="pt")
outputs = model.generate(input_ids)
output_sequence = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(sample, "\n --->", output_sequence)
```
## Training
This model was trained on a custom dataset. The base model used was the T5 base model.
## Limitations and Future Work
As with any machine learning model, the outputs of this keyword generator depend on the data it was trained on. It is possible that the model might generate inappropriate or biased keywords if the input text contains such content. Future iterations of the model will aim to improve its robustness and fairness, and to minimize potential bias.
|
TheRains/cv9-special-batch4-small
|
TheRains
| 2023-08-06T14:14:38Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T02:13:40Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 12.431561996779388
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2333
- Wer: 12.4316
## Model description
More information needed
## Intended uses & limitations
More information needed
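As a minimal, hedged usage sketch (the audio file name and chunking settings are assumptions, not part of the original card):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="TheRains/cv9-special-batch4-small",
    chunk_length_s=30,
)
print(asr("sample_indonesian_audio.wav")["text"])  # path is an assumption
```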
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3372 | 0.48 | 1000 | 0.2893 | 16.1123 |
| 0.2785 | 0.97 | 2000 | 0.2590 | 14.6032 |
| 0.1318 | 1.45 | 3000 | 0.2535 | 13.8532 |
| 0.1384 | 1.94 | 4000 | 0.2333 | 12.4316 |
| 0.0541 | 2.42 | 5000 | 0.2427 | 12.5650 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Shlomi1/model
|
Shlomi1
| 2023-08-06T14:12:30Z | 31 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T11:52:53Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of a stroller
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Shlomi1/model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of a stroller using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
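A hedged inference sketch with `diffusers`, built around the instance prompt from this card ("a photo of a stroller"); all other settings are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Shlomi1/model", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of a stroller in a park", num_inference_steps=50).images[0]
image.save("stroller.png")
```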
|
YanJiangJerry/covid-twitter-bert-v2_1_4_2e-05_0.01
|
YanJiangJerry
| 2023-08-06T14:06:17Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:digitalepidemiologylab/covid-twitter-bert-v2",
"base_model:finetune:digitalepidemiologylab/covid-twitter-bert-v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T13:55:29Z |
---
license: mit
base_model: digitalepidemiologylab/covid-twitter-bert-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-twitter-bert-v2_1_4_2e-05_0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-twitter-bert-v2_1_4_2e-05_0.01
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1675
- Accuracy: 0.9659
- F1: 0.9117
- Precision: 0.8761
- Recall: 0.9502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2014 | 1.0 | 1629 | 0.1675 | 0.9659 | 0.9117 | 0.8761 | 0.9502 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Hossein69/test1
|
Hossein69
| 2023-08-06T13:57:18Z | 0 | 0 |
keras
|
[
"keras",
"code",
"tabular-classification",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2023-08-06T13:54:46Z |
---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
metrics:
- accuracy
- brier_score
- bertscore
library_name: keras
pipeline_tag: tabular-classification
tags:
- code
---
|
shibal1/hassaku-hentai-SDAPI-upload
|
shibal1
| 2023-08-06T13:51:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T13:41:54Z |
---
license: creativeml-openrail-m
---
Original Author: https://civitai.com/models/2583?modelVersionId=106922
This repository was created to host models to be uploaded to the Stable Diffusion API community models (e.g. reloading 'hassaku-hentai' at its latest revision).
|
hi-august/whisper-large-v2-Japanese-10steps
|
hi-august
| 2023-08-06T13:48:43Z | 2 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T13:44:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the code sketch after this list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
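A minimal sketch of the equivalent 8-bit `BitsAndBytesConfig` (values taken from the list above; the base checkpoint is an assumption based on the repository name):
```python
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration

bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

# Base checkpoint is assumed from the repo name "whisper-large-v2-Japanese-10steps".
base_model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", quantization_config=bnb_config, device_map="auto"
)
```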
### Framework versions
- PEFT 0.5.0.dev0
|
bin-zheng1/sales-LLM
|
bin-zheng1
| 2023-08-06T13:40:47Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T13:40:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
loony-user/cnn_news_summary_model_trained_on_reduced_data
|
loony-user
| 2023-08-06T13:40:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-06T13:04:13Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: cnn_news_summary_model_trained_on_reduced_data
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: train[:3%]
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 0.2184
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cnn_news_summary_model_trained_on_reduced_data
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5909
- Rouge1: 0.2184
- Rouge2: 0.0951
- Rougel: 0.1841
- Rougelsum: 0.1843
- Generated Length: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:|
| No log | 1.0 | 431 | 1.6006 | 0.2181 | 0.0944 | 0.1837 | 0.1838 | 19.0 |
| 1.8083 | 2.0 | 862 | 1.5923 | 0.2187 | 0.0952 | 0.1842 | 0.1845 | 19.0 |
| 1.8004 | 3.0 | 1293 | 1.5909 | 0.2184 | 0.0951 | 0.1841 | 0.1843 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
SmellyKat/Pyramids-ppo
|
SmellyKat
| 2023-08-06T13:34:04Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-06T13:33:57Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SmellyKat/Pyramids-ppo
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kejolong/nicorobin
|
kejolong
| 2023-08-06T13:31:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T13:24:34Z |
---
license: creativeml-openrail-m
---
|
gokulk1804/my-pet-cat
|
gokulk1804
| 2023-08-06T13:31:26Z | 17 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T13:18:29Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-pet-CAt Dreambooth model trained by gokulk1804 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AJCE137
Sample pictures of this concept:


|
maazie/EfficientNetB0
|
maazie
| 2023-08-06T13:29:51Z | 135 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"efficientnet",
"image-classification",
"dataset:imagenet-1k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-20T16:23:48Z |
---
pipeline_tag: image-classification
datasets:
- imagenet-1k
---
This is an EfficientNetB0 model, trained on the ImageNet-1k dataset.
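A hedged usage sketch with the `transformers` image-classification pipeline (the image URL is just a public example picture):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="maazie/EfficientNetB0")
preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")  # example image (two cats)
print(preds[:3])
```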
|
YanJiangJerry/bertweet-base_epoch1_batch4_lr2e-05_w0.005
|
YanJiangJerry
| 2023-08-06T13:27:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T11:58:34Z |
---
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bertweet-base_epoch1_batch4_lr2e-05_w0.005
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base_epoch1_batch4_lr2e-05_w0.005
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4254
- Accuracy: 0.8521
- F1: 0.8058
- Precision: 0.7886
- Recall: 0.8239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5183 | 1.0 | 788 | 0.4254 | 0.8521 | 0.8058 | 0.7886 | 0.8239 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
tiggerhelloworld/q-Taxi-v3
|
tiggerhelloworld
| 2023-08-06T13:26:27Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T13:26:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.61
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="tiggerhelloworld/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
abhishek47/Cartpole-reinforce-v1
|
abhishek47
| 2023-08-06T13:24:03Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T13:23:53Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run3
|
salohnana2018
| 2023-08-06T13:19:02Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"tensorboard",
"bert",
"adapterhub:Arabic ABSA/SemEvalHotelReview",
"dataset:Hotel",
"region:us"
] | null | 2023-08-06T12:36:28Z |
---
tags:
- adapter-transformers
- adapterhub:Arabic ABSA/SemEvalHotelReview
- bert
datasets:
- Hotel
---
# Adapter `salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run3` for CAMeL-Lab/bert-base-arabic-camelbert-msa
An [adapter](https://adapterhub.ml) for the `CAMeL-Lab/bert-base-arabic-camelbert-msa` model that was trained on the [Arabic ABSA/SemEvalHotelReview](https://adapterhub.ml/explore/Arabic%20ABSA/SemEvalHotelReview/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-msa")
adapter_name = model.load_adapter("salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run3", source="hf", set_active=True)
```
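With the adapter and its classification head active, inference can follow the usual transformers pattern. A minimal sketch (the sentence-pair input format of aspect plus review sentence, and the label handling, are assumptions rather than details taken from this card):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-msa")

# Sentence-pair input: (aspect term, review sentence); ABSA format assumed
inputs = tokenizer("الخدمة", "الخدمة في الفندق كانت ممتازة", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = int(logits.argmax(dim=-1))
print(predicted_class)  # map this index to the sentiment labels used during training
```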
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
yaya2169/conangray
|
yaya2169
| 2023-08-06T13:17:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-06T13:15:45Z |
250 epochs, 40k sample rate, RVC v2
|
CyberHarem/power_nikke
|
CyberHarem
| 2023-08-06T13:16:20Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/power_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T13:10:44Z |
---
license: mit
datasets:
- CyberHarem/power_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of power_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/power_nikke.pt` as the embedding and `1500/power_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `power_nikke`.**
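As a point of reference, below is a hedged sketch of wiring both files into a 🧨 diffusers pipeline. Whether HCP-Diffusion exports load cleanly through these loaders is an assumption, and the base checkpoint, step folder and prompt are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base model assumed, not specified by this card
    torch_dtype=torch.float16,
).to("cuda")

# The pt file is used as a textual-inversion embedding, the safetensors file as a LoRA.
pipe.load_textual_inversion("1500/power_nikke.pt", token="power_nikke")
pipe.load_lora_weights("1500/power_nikke.safetensors")

image = pipe("power_nikke, best quality", num_inference_steps=30).images[0]
image.save("power_nikke.png")
```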
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/power_nikke.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/power_nikke.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/power_nikke.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/power_nikke.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/power_nikke.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/power_nikke.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/power_nikke.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/power_nikke.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/power_nikke.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/power_nikke.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/power_nikke.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/power_nikke.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/power_nikke.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/power_nikke.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/power_nikke.zip) |
|
hopkins/eng-deu-trial4
|
hopkins
| 2023-08-06T13:14:57Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-05T15:15:47Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-trial4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial4
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
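A short translation sketch follows; the mBART-50 language codes (`en_XX`, `de_DE`) come from the base model and are assumptions for this fine-tune:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hopkins/eng-deu-trial4")
model = AutoModelForSeq2SeqLM.from_pretrained("hopkins/eng-deu-trial4")

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"],  # target language: German
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```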
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheRains/cv9-special-batch4-tiny
|
TheRains
| 2023-08-06T13:11:46Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T05:18:36Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 32.55118472509777
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4997
- Wer: 32.5512
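For quick testing, a minimal inference sketch with the `pipeline` API follows (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="TheRains/cv9-special-batch4-tiny",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])  # replace with a path to your Indonesian audio file
```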
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.7055 | 0.48 | 1000 | 0.6329 | 42.1072 |
| 0.5685 | 0.97 | 2000 | 0.5515 | 35.8638 |
| 0.3807 | 1.45 | 3000 | 0.5232 | 34.0189 |
| 0.3766 | 1.94 | 4000 | 0.4993 | 32.6708 |
| 0.3567 | 2.42 | 5000 | 0.4997 | 32.5512 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chinhon/pegasus-multi_news-headline_57k
|
chinhon
| 2023-08-06T12:52:58Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-14T07:44:00Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-multi_news-headline_57k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-multi_news-headline_57k
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4503
- Rouge1: 42.3147
- Rouge2: 23.2213
- Rougel: 35.7441
- Rougelsum: 35.8964
- Gen Len: 33.8245
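A minimal headline-generation sketch using the summarization pipeline (the generation settings below are assumptions):

```python
from transformers import pipeline

headline_writer = pipeline(
    "summarization", model="chinhon/pegasus-multi_news-headline_57k"
)
article = "Replace this with the article text you want a headline for."
print(headline_writer(article, max_length=48, truncation=True)[0]["summary_text"])
```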
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6546 | 1.0 | 11339 | 1.5170 | 41.7822 | 22.7843 | 35.3913 | 35.5749 | 34.1139 |
| 1.5132 | 2.0 | 22678 | 1.4602 | 42.0161 | 22.9778 | 35.5357 | 35.6921 | 33.9944 |
| 1.4147 | 3.0 | 34017 | 1.4503 | 42.3147 | 23.2213 | 35.7441 | 35.8964 | 33.8245 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.0
- Tokenizers 0.13.1
|
s3nh/chinese-alpaca-2-7b-GGML
|
s3nh
| 2023-08-06T12:44:54Z | 0 | 7 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T07:58:43Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir / ggml_file should point to the downloaded GGML weights
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
**This is the full Chinese-Alpaca-2-7B model, which can be loaded directly for inference and full-parameter training.**
**Related models👇**
* Base models
* [Chinese-LLaMA-2-7B (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-7b)
* [Chinese-LLaMA-2-LoRA-7B (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-7b)
* Instruction/Chat models
* [Chinese-Alpaca-2-7B (full model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b)
* [Chinese-Alpaca-2-LoRA-7B (LoRA model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-lora-7b)
# Description of Chinese-LLaMA-Alpaca-2
This project is based on the Llama-2, released by Meta, and it is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source Chinese LLaMA-2 (foundation model) and Alpaca-2 (instruction-following model). These models have been expanded and optimized with Chinese vocabulary beyond the original Llama-2. We used large-scale Chinese data for incremental pre-training, which further improved the fundamental semantic understanding of the Chinese language, resulting in a significant performance improvement compared to the first-generation models. The relevant models support a 4K context and can be expanded up to 18K+ using the NTK method.
The main contents of this project include:
* 🚀 New extended Chinese vocabulary beyond Llama-2, open-sourcing the Chinese LLaMA-2 and Alpaca-2 LLMs.
* 🚀 Open-sourced the pre-training and instruction finetuning (SFT) scripts for further tuning on user's data
* 🚀 Quickly deploy and experience the quantized LLMs on CPU/GPU of personal PC
* 🚀 Support for LLaMA ecosystems like 🤗transformers, llama.cpp, text-generation-webui, LangChain, vLLM etc.
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for details.
|
nokotin/a2c-PandaReachDense-v2
|
nokotin
| 2023-08-06T12:42:22Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T12:40:06Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.85 +/- 0.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
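One way the TODO above could be filled in; the checkpoint file name and the `panda_gym` environment setup are assumptions:

```python
import gym
import panda_gym  # noqa: F401  (importing registers the PandaReachDense-v2 environment)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="nokotin/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # file name assumed from the usual naming scheme
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```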
|
CyberHarem/folkwang_nikke
|
CyberHarem
| 2023-08-06T12:35:04Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/folkwang_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T12:31:12Z |
---
license: mit
datasets:
- CyberHarem/folkwang_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of folkwang_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/folkwang_nikke.pt` as the embedding and `1500/folkwang_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `folkwang_nikke`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/folkwang_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/folkwang_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/folkwang_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/folkwang_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/folkwang_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/folkwang_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/folkwang_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/folkwang_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/folkwang_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/folkwang_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/folkwang_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/folkwang_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/folkwang_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/folkwang_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/folkwang_nikke.zip) |
|
voxxer/Lunar_Lander_v2_PPO
|
voxxer
| 2023-08-06T12:16:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T12:15:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.82 +/- 15.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
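One possible way to fill in the TODO; the checkpoint file name is assumed, and the rollout uses the classic Gym API (gym < 0.26):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="voxxer/Lunar_Lander_v2_PPO",
    filename="ppo-LunarLander-v2.zip",  # assumed file name
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
episode_return = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)  # with gymnasium, reset/step return extra values
    episode_return += reward
print(f"episode return: {episode_return:.1f}")
```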
|
sartmis1/starcoder-finetune-oasst1
|
sartmis1
| 2023-08-06T12:14:00Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"gpt_bigcode",
"en",
"dataset:HuggingFaceH4/oasst1_en",
"base_model:bigcode/starcoder",
"base_model:adapter:bigcode/starcoder",
"region:us"
] | null | 2023-08-04T11:04:01Z |
---
base_model: bigcode/starcoder
model-index:
- name: starcoder-finetune-oasst1
results: []
library_name: peft
datasets:
- HuggingFaceH4/oasst1_en
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
### Model Description
Starcoder Model fine-tuned on HuggingFaceH4/oasst1_en dataset.
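A hedged loading sketch with 🤗 PEFT; the dtype/device settings and the prompt format are assumptions, and access to the gated `bigcode/starcoder` base model is required:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "sartmis1/starcoder-finetune-oasst1")

prompt = "Question: How do I reverse a list in Python?\n\nAnswer:"  # prompt format assumed
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```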
|