| modelId (stringlengths 5–139) | author (stringlengths 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-07 18:30:29) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (stringclasses, 544 values) | tags (listlengths 1–4.05k) | pipeline_tag (stringclasses, 55 values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-07 18:30:28) | card (stringlengths 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
Beniuv/q-FrozenLake-v1-4x4-noSlippery
|
Beniuv
| 2023-07-26T17:47:21Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-26T17:47:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Beniuv/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
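If `load_from_hub` is not defined in your environment (it comes from the Deep RL Course notebooks), a minimal sketch along these lines should work; the dictionary keys `env_id` and `qtable` are assumptions based on that course's saving convention:
```python
import pickle

import gymnasium as gym   # or `import gym`, depending on your setup
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Beniuv/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)  # match the non-slippery training setup
state, info = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```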
|
Ai-tensa/FlexWaifu
|
Ai-tensa
| 2023-07-26T17:45:13Z | 0 | 18 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-20T06:46:59Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Flex Waifu
FlexWaifu is a model fine-tuned from Waifu Diffusion 1.3 to produce high-resolution images and images with different aspect ratios.
## Model Description
| Model Name | Recipe |
| ---------------- | --------------------------------------------------------------------------------- |
| FlexWaifu | WD1.3 + 2.0 * hires_test_a + smooth_test_a |
| FlexWaifu v1.3.1 | FlexWaifu + 20.0 * IR_1girl1boy_1 + 16.0 * IR_1girl1boy_2 + 16.0 * IR_1girl1boy_3 |
| FlexWaifuRainbow | FlexWaifu + FWRLoRA |
| (Exp) FW TEfixed | WD1.3 + hires_test_d + FW_TEfixed + FW_TEfixed2 |
### FlexWaifu
This model was created by merging two original LoRAs from [testLoRAs](https://huggingface.co/Ai-tensa/testLoRAs) into WD1.3.
It is simply a merged model.
While it is likely to produce good results at medium resolutions, consider using the LoRAs from [testLoRAs](https://huggingface.co/Ai-tensa/testLoRAs) directly if it does not.
#### v1.3.1
The model was fine-tuned on self-generated images produced with the single-word prompt "1girl" or "1boy", and it generates well without much prompting.
Twin LoRA reduced the percentage of bad outputs without changing the style much.
The images for the three Twin LoRAs were generated by FlexWaifu or by a model with a Twin LoRA merged into it.
3-5k images were used per Twin LoRA, with no duplicates.
The reward value of ImageReward was used to classify the images.
##### Twin LoRA
When fine-tuning, we create two LoRAs, a good LoRA and a bad LoRA, and take the difference between them in order to suppress adverse effects while achieving the desired effect.
**Method**
1. Create a set of images with the same prompt.
2. Divide the image set into two equal parts, good and bad, according to some evaluation criterion.
3. Create a LoRA for each dataset.
4. Subtract the bad LoRA from the good LoRA. (Note: the resulting dimension is twice that of the original LoRAs.)
5. Apply the differential LoRA at any weight, as in the sketch below.
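A minimal sketch of steps 4-5 (not the author's actual script), assuming kohya-style LoRA files whose keys end in `lora_down.weight`, `lora_up.weight`, and `alpha`, and that both LoRAs share the same keys and rank. Concatenating the two low-rank factors, with the bad branch negated, yields a LoRA whose effective update is "good minus bad"; its rank is doubled, which is why the note above says the dimension is twice the original:
```python
import torch
from safetensors.torch import load_file, save_file

good = load_file("lora_good.safetensors")   # placeholder file names
bad = load_file("lora_bad.safetensors")

diff = {}
for key, g in good.items():
    b = bad[key]                                 # assumes matching keys and ranks
    if key.endswith("lora_down.weight"):         # (rank, ...) -> stack along the rank dimension
        diff[key] = torch.cat([g, b], dim=0)
    elif key.endswith("lora_up.weight"):         # (..., rank) -> negate the "bad" branch
        diff[key] = torch.cat([g, -b], dim=1)
    elif key.endswith("alpha"):                  # rank doubled -> double alpha so the
        diff[key] = g * 2                        # per-rank scale alpha/rank is unchanged
    else:
        diff[key] = g

save_file(diff, "lora_good_minus_bad.safetensors")
```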
### Flex Waifu Rainbow
This model is further fine-tuned from FlexWaifu with ~17k nijijourneyv5-tagged images from various authors published on the Internet.
It is merged from six dim-8 LoRAs made with various settings, and FWRLoRA is the merged LoRA (dim 48).
Most LoRAs were fine-tuned with Aspect Ratio Bucketing at a maximum resolution of 1152x768, but some used up to 768x768 or 512x768.
Image captions were generated with BLIP, and ~12k images were also tagged with the WD1.4 tagger.
### Usage
The caption format suggests that a short natural-language sentence followed by comma-separated tags is the most natural way to write the prompt.
Using more of the tags that the tagger estimated reliably on the training images may lead to more preferable generations.
"tags.json" lists the tags estimated for over 200 of the 12k images.
Because the tags were assigned automatically, their semantics may be imprecise, so adjust tag emphasis as appropriate.
CLIP Skip 1 is recommended.
### (Experimental) Flex Waifu TEfixed
This is an experimental model fine-tuned from WD1.3 with the text encoder (TE) fixed; it is not based on FW1.3.1. Use whichever you prefer.
Fixing the TE is intended to avoid degrading language understanding during fine-tuning, but the model may be under-trained because learning is slower.
The datasets are [NijiJourney-Prompt-Pairs](https://huggingface.co/datasets/Korakoe/NijiJourney-Prompt-Pairs) and AI illustration tag images, including the past datasets, for a total of 370k images.
We also provide a TE-fixed version of FWRLoRA. Its dataset consists of 19k nijijourneyv5 tag images, including the dataset of the previous FWRLoRA. There are two versions: FWR_TEfixed, trained with a low learning rate, and FWR_TEfixed2, trained with a high learning rate.
**The TE-fixed version does not require underscores in prompts, because it was trained with captions without underscores.**
**Examples**

```
solo, 1girl, full body, white background, indoors, from side, short hair, bangs, long sleeves, closed mouth, white hair, jewelry, blush, standing, earrings dress, flower, shirt, blue eyes, hair ornament, skirt, bow
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 30, Sampler: UniPC, CFG scale: 7, Seed: 2772485736, Size: 768x768, Model hash: 82f8443e2b, Model: wdAI_test_FW_TEfixed, Denoising strength: 0.6, Version: v1.2.1, Hires upscale: 1.5, Hires steps: 15, Hires upscaler: Latent, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: FWR_TEfixed(8d97f2dd88d6), AddNet Weight A 1: -2.0, AddNet Weight B 1: -2.0, Script: X/Y/Z plot, X Type: AddNet Weight 1, X Values: "-2, -1, 0, 1, 2", Y Type: AddNet Model 1, Y Values: "FWR_TEfixed(8d97f2dd88d6),FWR_TEfixed2(539136a8cf23)"
```

```
solo, 1girl, looking at viewer, long hair, full body, bangs, long sleeves, closed mouth, black hair, jewelry, blush, standing, smile, earrings, outdoors, dress, flower, shirt, blue eyes, hair ornament, sky
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 30, Sampler: UniPC, CFG scale: 7, Seed: 3438137896, Size: 768x768, Model hash: 82f8443e2b, Model: wdAI_test_FW_TEfixed, Denoising strength: 0.6, Version: v1.2.1, Hires upscale: 1.5, Hires steps: 15, Hires upscaler: Latent, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: FWR_TEfixed(8d97f2dd88d6), AddNet Weight A 1: -2.0, AddNet Weight B 1: -2.0, Script: X/Y/Z plot, X Type: AddNet Weight 1, X Values: "-2, -1, 0, 1, 2", Y Type: AddNet Model 1, Y Values: "FWR_TEfixed(8d97f2dd88d6),FWR_TEfixed2(539136a8cf23)"
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Acknowledgements
These models build on two excellent works: SD1.4, developed by [CompVis Researchers](https://ommer-lab.com/), and WD1.3, developed by [Anthony Mercurio](https://github.com/harubaru), [Salt](https://github.com/sALTaccount/), and [Cafe](https://twitter.com/cafeai_labs).
## Examples
**CLIP Skip 1 is recommended.**
### Flex Waifu v1.3.1
**Prompt 1**
- with Negative Prompt

- without Negative Prompt

```
solo, 1girl, white_background, full_body, twintails, braid, white_background, bangs, frills, closed_mouth, brown_hair, jewelry, blush, standing, dress, food, strawberry, (blueberry:1.1), (cake:1.1), sweets, brown_eyes, hair_ornament, skirt, bow
Steps: 30, Sampler: UniPC, CFG scale: 7, Seed: 1443377636, Size: 768x768, Model hash: 7b8c719863, Model: FlexWaifu_FlexWaifu1.3.1, Denoising strength: 0.6, Version: v1.2.1, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
**Prompt 2**
- with Negative Prompt

- without Negative Prompt

```
solo, 1girl, looking_at_viewer, long_hair, full_body, bangs, long_sleeves, closed_mouth, black_hair, jewelry, blush, standing, smile, earrings, outdoors, dress, flower, shirt, blue_eyes, hair_ornament, sky
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 18, Sampler: UniPC, CFG scale: 7, Seed: 4292874896, Size: 768x768, Model hash: 7b8c719863, Model: FlexWaifu_FlexWaifu1.3.1, Denoising strength: 0.6, Version: v1.2.1, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
**Prompt 3**
- with Negative Prompt

- without Negative Prompt

```
solo, 1girl, full_body, white_background, indoors, from_side, short_hair, bangs, long_sleeves, closed_mouth, white_hair, jewelry, blush, standing, earrings dress, flower, shirt, blue_eyes, hair_ornament, skirt, bow
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 18, Sampler: UniPC, CFG scale: 7, Seed: 225005778, Size: 768x768, Model hash: 7b8c719863, Model: FlexWaifu_FlexWaifu1.3.1, Denoising strength: 0.6, Version: v1.2.1, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
**Prompt 4**
- with Negative Prompt

- without Negative Prompt

```
a woman holding jar is standing near kitchen, 1girl, solo, long_hair, breasts, looking_at_viewer, blush, smile, bangs, black_hair, hair_ornament, dress, holding, bare_shoulders, closed_mouth, standing, collarbone, ponytail, braid, sidelocks, day, indoors, hair_over_shoulder, white_dress, window, shelf, sunlight, cup, bottle, holding_cup, mug, plant, potted_plant
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 18, Sampler: UniPC, CFG scale: 7, Seed: 3916047400, Size: 768x768, Model hash: 7b8c719863, Model: FlexWaifu_FlexWaifu1.3.1, Denoising strength: 0.6, Version: v1.2.1, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
### Flex Waifu Rainbow
**Prompt 1**


```
solo, 1girl, white_background, full_body, twintails, braid, white_background, bangs, frills, closed_mouth, brown_hair, jewelry, blush, standing, dress, food, strawberry, (blueberry:1.1), (cake:1.1), sweets, brown_eyes, hair_ornament, skirt, bow
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 30, Sampler: UniPC, CFG scale: 7, Seed: 3282638012, Size: 768x768, Model hash: 3f709dac23, Model: models_FlexWaifu_FlexWaifuRainbow, Denoising strength: 0.6, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
**Prompt 2**

```
solo, 1girl, looking_at_viewer, long_hair, full_body, bangs, long_sleeves, closed_mouth, black_hair, jewelry, blush, standing, smile, earrings, outdoors, dress, flower, shirt, blue_eyes, hair_ornament, sky
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 30, Sampler: UniPC, CFG scale: 7, Seed: 4292874896, Size: 768x768, Model hash: 3f709dac23, Model: models_FlexWaifu_FlexWaifuRainbow, Denoising strength: 0.6, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
**Prompt 3**

```
solo, 1girl, full_body, white_background, indoors, from_side, short_hair, bangs, long_sleeves, closed_mouth, white_hair, jewelry, blush, standing, earrings dress, flower, shirt, blue_eyes, hair_ornament, skirt, bow
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 30, Sampler: UniPC, CFG scale: 7, Seed: 225005778, Size: 768x768, Model hash: 3f709dac23, Model: models_FlexWaifu_FlexWaifuRainbow, Denoising strength: 0.6, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
**Prompt 4**
- CLIP Skip 1 (recommended)

```
a woman holding jar is standing near kitchen, 1girl, solo, long_hair, breasts, looking_at_viewer, blush, smile, bangs, black_hair, hair_ornament, dress, holding, bare_shoulders, closed_mouth, standing, collarbone, ponytail, braid, sidelocks, day, indoors, hair_over_shoulder, white_dress, window, shelf, sunlight, cup, bottle, holding_cup, mug, plant, potted_plant
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 30, Sampler: DPM++ 2M, CFG scale: 7, Seed: 3916047400, Size: 768x768, Model hash: 3f709dac23, Model: models_FlexWaifu_FlexWaifuRainbow, Denoising strength: 0.6, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
- CLIP Skip 2

```
a woman holding jar is standing near kitchen, 1girl, solo, long_hair, breasts, looking_at_viewer, blush, smile, bangs, black_hair, hair_ornament, dress, holding, bare_shoulders, closed_mouth, standing, collarbone, ponytail, braid, sidelocks, day, indoors, hair_over_shoulder, white_dress, window, shelf, sunlight, cup, bottle, holding_cup, mug, plant, potted_plant
Negative prompt: 2girls, worst quality, low quality, lowres, blurry, bad anatomy, text, frame
Steps: 30, Sampler: DPM++ 2M, CFG scale: 7, Seed: 3156421384, Size: 768x768, Model hash: 3f709dac23, Model: models_FlexWaifu_FlexWaifuRainbow, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires steps: 18, Hires upscaler: Latent
```
|
wuru330/378A1_results_2
|
wuru330
| 2023-07-26T17:40:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-26T16:54:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 378A1_results_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 378A1_results_2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5007
- Accuracy: 0.8861
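A minimal inference sketch (not part of the original card); the label set depends on the unspecified fine-tuning dataset, and `example.jpg` is a placeholder path:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="wuru330/378A1_results_2")
print(classifier("example.jpg"))  # returns the fine-tuned labels with scores
```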
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2051 | 1.0 | 37 | 1.0438 | 0.6429 |
| 0.6643 | 2.0 | 74 | 0.6103 | 0.7925 |
| 0.4615 | 3.0 | 111 | 0.4720 | 0.8435 |
| 0.3136 | 4.0 | 148 | 0.3668 | 0.8776 |
| 0.278 | 5.0 | 185 | 0.3650 | 0.8622 |
| 0.1875 | 6.0 | 222 | 0.3705 | 0.8690 |
| 0.1215 | 7.0 | 259 | 0.4093 | 0.8741 |
| 0.0885 | 8.0 | 296 | 0.3428 | 0.9014 |
| 0.0497 | 9.0 | 333 | 0.3854 | 0.8759 |
| 0.0348 | 10.0 | 370 | 0.4291 | 0.8707 |
| 0.0301 | 11.0 | 407 | 0.4464 | 0.8895 |
| 0.0246 | 12.0 | 444 | 0.4208 | 0.8929 |
| 0.0218 | 13.0 | 481 | 0.4256 | 0.8912 |
| 0.0198 | 14.0 | 518 | 0.4300 | 0.8878 |
| 0.0179 | 15.0 | 555 | 0.4403 | 0.8861 |
| 0.0165 | 16.0 | 592 | 0.4481 | 0.8861 |
| 0.0155 | 17.0 | 629 | 0.4554 | 0.8878 |
| 0.0146 | 18.0 | 666 | 0.4632 | 0.8878 |
| 0.0137 | 19.0 | 703 | 0.4691 | 0.8844 |
| 0.0129 | 20.0 | 740 | 0.4747 | 0.8861 |
| 0.0125 | 21.0 | 777 | 0.4792 | 0.8844 |
| 0.0119 | 22.0 | 814 | 0.4840 | 0.8844 |
| 0.0113 | 23.0 | 851 | 0.4875 | 0.8861 |
| 0.0111 | 24.0 | 888 | 0.4924 | 0.8844 |
| 0.0108 | 25.0 | 925 | 0.4947 | 0.8844 |
| 0.0105 | 26.0 | 962 | 0.4966 | 0.8844 |
| 0.0104 | 27.0 | 999 | 0.4988 | 0.8861 |
| 0.0102 | 28.0 | 1036 | 0.4997 | 0.8861 |
| 0.0101 | 29.0 | 1073 | 0.5005 | 0.8861 |
| 0.01 | 30.0 | 1110 | 0.5007 | 0.8861 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
samarla/RoBERTa-base-cola
|
samarla
| 2023-07-26T17:37:48Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-26T15:07:55Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: samarla/RoBERTa-base-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# samarla/RoBERTa-base-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4967
- Validation Loss: nan
- Train Matthews Correlation: 0.0
- Epoch: 0
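A minimal inference sketch (not part of the original card). Note that the reported validation loss is nan and the Matthews correlation is 0.0 after a single epoch, so predictions may not be meaningful:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "samarla/RoBERTa-base-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The book was written by John.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class id (CoLA acceptability)
```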
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.4967 | nan | 0.0 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Showroom/clothing_general_category
|
Showroom
| 2023-07-26T17:24:12Z | 107 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:Showroom/autotrain-data-clothing_general_category",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-26T17:22:31Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- Showroom/autotrain-data-clothing_general_category
co2_eq_emissions:
emissions: 0.581478668947555
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 77598140607
- CO2 Emissions (in grams): 0.5815
## Validation Metrics
- Loss: 0.188
- Accuracy: 0.963
- Macro F1: 0.969
- Micro F1: 0.963
- Weighted F1: 0.962
- Macro Precision: 0.967
- Micro Precision: 0.963
- Weighted Precision: 0.963
- Macro Recall: 0.972
- Micro Recall: 0.963
- Weighted Recall: 0.963
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Showroom/autotrain-clothing_general_category-77598140607
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Showroom/autotrain-clothing_general_category-77598140607", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Showroom/autotrain-clothing_general_category-77598140607", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
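To turn the logits into a category name, a short follow-up sketch (assuming the label mapping is stored in the model config, as is typical for AutoTrain exports):
```python
import torch

predicted_id = int(torch.argmax(outputs.logits, dim=-1))
print(model.config.id2label[predicted_id])  # human-readable clothing category
```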
|
WforGodot/add-lora
|
WforGodot
| 2023-07-26T17:16:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T17:04:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
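For reference, a minimal sketch (not part of the original card) of recreating the quantization config above and attaching this adapter with PEFT; the base model is not stated in the card, so the identifier below is a placeholder:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                      # placeholder: base model is not given in this card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "WforGodot/add-lora")
```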
### Framework versions
- PEFT 0.4.0
|
DarkAirforce/ppo-SnowballTarget
|
DarkAirforce
| 2023-07-26T17:03:51Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-26T17:03:44Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DarkAirforce/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
li-ping/river_llama2_4_data_ft
|
li-ping
| 2023-07-26T17:02:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T17:00:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
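A corresponding loading sketch (not part of the original card); the base model is not stated here, and the repository name only suggests a Llama-2 variant, so the identifier below is a placeholder:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "base-llama2-model-id",                               # placeholder
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "li-ping/river_llama2_4_data_ft")
```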
### Framework versions
- PEFT 0.5.0.dev0
|
mojtabak/dqn-SpaceInvadersNoFrameskip-v4
|
mojtabak
| 2023-07-26T16:59:06Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-26T16:58:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 116.00 +/- 70.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mojtabak -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mojtabak -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
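Alternatively, to load the checkpoint directly in Python instead of through the RL Zoo scripts, a minimal sketch with `huggingface_sb3` (the filename is an assumption based on the RL Zoo's usual naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="mojtabak/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint)
```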
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mojtabak
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 200000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 5000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jamesdborin/ct2-int8-mpt-7b-8k
|
jamesdborin
| 2023-07-26T16:59:04Z | 6 | 0 |
transformers
|
[
"transformers",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-26T16:47:20Z |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
---
# MPT-7B-8k
MPT-7B-8k is a decoder-style transformer pretrained starting from MPT-7B, but updating the sequence length to 8k and training for an additional 500B tokens, resulting in a total of 1.5T tokens of text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B-8k is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B-8k is
* **Licensed for the possibility of commercial use.**
* **Trained on a large amount of data** (1.5T tokens like [XGen](https://huggingface.co/Salesforce/xgen-7b-8k-base) vs. 1T for [LLaMA](https://arxiv.org/abs/2302.13971), 1T for [MPT-7B](https://www.mosaicml.com/blog/mpt-7b), 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409). With ALiBi, the model can extrapolate beyond the 8k training sequence length to up to 10k, and with a few million tokens it can be finetuned to extrapolate much further.
* **Capable of fast training and inference** via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B-8k:
The following models are finetuned on MPT-7B-8k:
* [MPT-7B-8k-Instruct](https://huggingface.co/mosaicml/mpt-7b-8k-instruct): a model for long-form instruction following (especially summarization and question-answering).
Built by finetuning MPT-7B-8k on several carefully curated datasets.
* License: _CC-BY-SA-3.0_
* [MPT-7B-8k-Chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B-8k on approximately 1.5B tokens of chat data.
* License: _CC-By-NC-SA-4.0_
## Model Date
July 18, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: MPT-7B-8k](https://www.mosaicml.com/blog/long-context-mpt-7b-8k)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-8k',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-8k'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-8k'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 10000 # (input + output) tokens can now be up to 10000
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-7B-8k tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-8k')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1.5T tokens in total. First it was trained for 1T tokens (with batch size 1760 and sequence length 2048) on the following data mix:
#### Data Mix for Original 1T Tokens Used to Train MPT-7B
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 |
#### Data Mix for Additional 500B Tokens Used to Further Train MPT-7B-8k
We took 80B tokens from document samples that were longer than 4096 tokens, and 120B tokens with varying document sample lengths that matched the "baseline" length distribution for a total of 200B tokens in a single dataset.
We then trained MPT-7B for 500B tokens with a maximum sequence length of 8192, resulting in MPT-7B-8k. Since we trained for 500B tokens using 200B tokens, nearly every subset was trained on for exactly 2.5 epochs.
| Sequence Length Distribution | Number of Tokens in Source (Billion) | Proportion | Effective Number of Tokens (Billion) | Epochs |
|---|---|---|---|---|
| mC4 3.1.0 - English (200+ words) - Baseline | 33.60 | 16.80% | 84.00 | 2.50 |
| mC4 3.1.0 - English (200+ words) - ≥4096 tokens | 23.04 | 11.52% | 57.60 | 2.50 |
| c4 - English - SemDedup 80% - Baseline | 30.12 | 15.06% | 75.30 | 2.50 |
| c4 - English - SemDedup 80% - ≥4096 tokens | 0.92 | 0.46% | 2.30 | 2.50 |
| RedPajama - CommonCrawl - Baseline | 8.52 | 4.26% | 21.30 | 2.50 |
| RedPajama - CommonCrawl - ≥4096 tokens | 12.80 | 6.40% | 32.00 | 2.50 |
| The Stack - Selected Languages - Baseline | 30.00 | 15.00% | 75.00 | 2.50 |
| The Stack - Selected Languages - ≥4096 tokens | 10.00 | 5.00% | 25.00 | 2.50 |
| RedPajama - Wikipedia - Baseline | 3.60 | 1.80% | 9.00 | 2.50 |
| RedPajama - Wikipedia - ≥4096 tokens | 1.04 | 0.52% | 2.60 | 2.50 |
| The Stack - Markdown - Baseline | 4.50 | 2.25% | 11.25 | 2.50 |
| The Stack - Markdown - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| Semantic Scholar ORC - Baseline | 3.30 | 1.65% | 8.25 | 2.50 |
| Semantic Scholar ORC - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| RedPajama - Books - Baseline | 3.00 | 1.50% | 7.50 | 2.50 |
| RedPajama - Books - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| RedPajama - arXiv - Baseline | 1.92 | 0.96% | 4.80 | 2.50 |
| RedPajama - arXiv - ≥4096 tokens | 5.40 | 2.70% | 13.50 | 2.50 |
| RedPajama - StackExchange - Baseline | 1.44 | 0.72% | 3.60 | 2.50 |
| RedPajama - StackExchange - ≥4096 tokens | 1.52 | 1.40% | 7.00 | 4.60 |
| N Training Tokens | 200 | 100.00% | | 2.5 epochs * 200B = 500B tokens |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)); this increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-8k is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B-8k can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-8k was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://www.mosaicml.com/get-started?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b-8k).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
liuyt75/t5-large_prefix_tuning_sentences_50agree_5
|
liuyt75
| 2023-07-26T16:50:22Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T16:50:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
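A minimal loading sketch (not part of the original card); the base model is not stated here, so `t5-large` is an assumption taken from the repository name:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("t5-large")        # assumed base model
tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = PeftModel.from_pretrained(base, "liuyt75/t5-large_prefix_tuning_sentences_50agree_5")
```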
|
GenerativeMagic/Llama-Engineer-Evol-7b
|
GenerativeMagic
| 2023-07-26T16:48:17Z | 8 | 5 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"code",
"llama2",
"en",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"dataset:sahil2801/CodeAlpaca-20k",
"dataset:teknium/GPTeacher-CodeInstruct",
"arxiv:2306.08568",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T00:52:05Z |
---
license: cc-by-sa-4.0
datasets:
- nickrosh/Evol-Instruct-Code-80k-v1
- sahil2801/CodeAlpaca-20k
- teknium/GPTeacher-CodeInstruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- code
- llama2
---

# Llama-Engineer-Evol-7B
This is a version of Meta's [chat instruction-tuned Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) further fine-tuned on over 80,000 coding samples.
The dataset is a combination of [Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) from [nickrosh](https://huggingface.co/nickrosh), a replication of Evol-Instruct-Code as described in the [WizardCoder](https://arxiv.org/pdf/2306.08568.pdf) paper, and [Teknium](https://huggingface.co/teknium)'s [GPTeacher](https://github.com/teknium1/GPTeacher/blob/main/Codegen/codegen-instruct.json). Special thanks to these folks for putting these datasets together.
Our fine-tuning process involved learning QLoRA weights for over 6 hours on a single A100. We merged the adapter weights into the pre-trained model.
GGML weights are available [here](https://huggingface.co/GenerativeMagic/Llama-Engineer-Evol-7b-GGML).
## Prompt Format
The recommended model prompt is a variant of the standard Llama 2 format:
```
[INST] <<SYS>>
You are a programming assistant. Always answer as helpfully as possible. Be direct in your response and get to the answer right away. Responses should be short.
<</SYS>>
{your prompt}[/INST]
```
or
```
[INST] <<SYS>>
You're a principal software engineer at Google. If you fail at this task, you will be fired.
<</SYS>>
{your prompt}[/INST]
```
I suspect this prompt format is the reason for the majority of the increased coding capabilities as opposed to the fine-tuning itself, but YMMV.
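A minimal generation sketch (not from the model authors) showing how this prompt format can be assembled with `transformers`; the example prompt is illustrative only:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GenerativeMagic/Llama-Engineer-Evol-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = ("You are a programming assistant. Always answer as helpfully as possible. "
          "Be direct in your response and get to the answer right away. Responses should be short.")
user = "Write a Python function that reverses a string."
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{user}[/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```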
## Evals
Currently, the evals are just off of \~vibes\~. Will look into doing a full suite of evals on future models. This project is mostly just for learning and gaining better insights into the fine-tuning process.
## Next Steps
- Prune the dataset and possibly fine-tune for longer.
- Run benchmarks.
- Provide GPTQ.
|
Melonie/text_to_image_finetuned
|
Melonie
| 2023-07-26T16:39:41Z | 28 | 12 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-26T16:18:50Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Melonie/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




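A minimal inference sketch (not part of the original card), assuming a recent `diffusers` release that provides `load_lora_weights`; the prompt is illustrative only:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Melonie/text_to_image_finetuned")  # apply the LoRA weights

image = pipe("a cute green pokemon with a leaf on its head", num_inference_steps=30).images[0]
image.save("pokemon.png")
```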
|
hugfacerhaha/a2c-AntBulletEnv-v0
|
hugfacerhaha
| 2023-07-26T16:35:14Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-26T16:33:59Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1208.84 +/- 123.93
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Melonie/pokemon-lora
|
Melonie
| 2023-07-26T16:35:05Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-26T16:24:57Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Melonie/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
aoliveira/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
aoliveira
| 2023-07-26T16:33:40Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-26T15:09:28Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7408
- Accuracy: 0.88
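A minimal inference sketch (not part of the original card) using the `transformers` audio-classification pipeline; `song.wav` is a placeholder path to any audio clip:
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="aoliveira/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
print(classifier("song.wav"))  # returns GTZAN genre labels with scores
```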
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5687 | 1.0 | 450 | 1.3520 | 0.58 |
| 0.0014 | 2.0 | 900 | 0.9949 | 0.7 |
| 0.2778 | 3.0 | 1350 | 0.7536 | 0.84 |
| 0.0042 | 4.0 | 1800 | 0.9976 | 0.86 |
| 0.0001 | 5.0 | 2250 | 0.7859 | 0.85 |
| 0.0002 | 6.0 | 2700 | 0.9659 | 0.86 |
| 0.0 | 7.0 | 3150 | 0.8016 | 0.88 |
| 0.0 | 8.0 | 3600 | 0.5691 | 0.88 |
| 0.0 | 9.0 | 4050 | 0.7230 | 0.88 |
| 0.0 | 10.0 | 4500 | 0.7408 | 0.88 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Melonie/sd-model-finetuned-lora
|
Melonie
| 2023-07-26T16:22:03Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-24T19:10:28Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Melonie/sd-model-finetuned-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
Teunis89/Reinforce-cartpole
|
Teunis89
| 2023-07-26T16:19:53Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-26T16:19:42Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 459.30 +/- 122.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ghimiresunil/MedDoctor
|
ghimiresunil
| 2023-07-26T16:01:12Z | 9 | 4 |
peft
|
[
"peft",
"GPT-J",
"text-generation",
"arxiv:2104.09864",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-07-25T05:00:03Z |
---
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
inference: false
tags:
- GPT-J
---
# GPT-J-6B-MedDoctor 👨⚕️
This chatbot model was built via Parameter-Efficient Fine-Tuning of [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6b) on all 16.3k rows of Medical Data. Finetuning was executed on a single A100 (40 GB) for roughly 1 day 7 hours.
* Model license: GPT-J Community License Agreement
* Data Preparation: [data prepare code](https://github.com/ghimiresunil/LLM-PowerHouse-A-Curated-Guide-for-Large-Language-Models-with-Custom-Training-and-Inferencing/blob/main/src/data_generate_prepare/data_prepare.py)
* Finetuning: [finetune code](https://github.com/ghimiresunil/LLM-PowerHouse-A-Curated-Guide-for-Large-Language-Models-with-Custom-Training-and-Inferencing/blob/main/src/train_inference_int_peft/trainer_int_peft_lora.py)
* Inference: [inference code](https://github.com/ghimiresunil/LLM-PowerHouse-A-Curated-Guide-for-Large-Language-Models-with-Custom-Training-and-Inferencing/blob/main/src/train_inference_int_peft/inference_int_peft_lora.py)
### Example prompts and responses
Example 1:
**User**:
> How can I cope with the stress and anxiety I've been experiencing lately?
**GPT-J-6B-MedDoctor-PEFT**:
> 1. Talk to friends and family to get support.
> 2. Do things you enjoy, such as sports, hobbies, or volunteering.
> 3. Learn relaxation techniques for when you are stressed.
> 4. Take short walks or get outside as often as you can to get fresh air.
Example 2:
**User**:
> I've been struggling with insomnia. What are some strategies to improve my sleep quality?
**GPT-J-6B-MedDoctor-PEFT**:
> Learn about sleep hygiene. You can read about this topic at the Sleep Education and Therapy Page. Sleep hygiene can help you
> 1. get the right amount of sleep at the right time
> 2. stay relaxed and comfortable throughout the night
> 3. fall asleep and stay asleep quickly
> 4. remember to call your doctor if you suddenly feel better and you have a fever
> 5. know when your next doctor visit is
> 6. don't take naps at night or during the day if you take medication that affects your sleep
> 7. don't drink caffeine or alcohol before you go to sleep or if you feel tired the next day
> 8. don't smoke
> 9. stop using caffeine or alcohol, or drink decaffeinated coffee
Example 3:
**User**:
> I have a chronic illness, and it's impacting my emotional well-being. How can I manage the emotional toll it's taking on me?
**GPT-J-6B-MedDoctor-PEFT**:
> Think about how often you experience feelings such as sadness, worry, anger, or guilt. For example, do you feel frustrated or embarrassed when you cannot make others happy? Do you experience frequent feelings of sadness, despair, and anger? If so, take time to talk to a health professional about how to develop a better understanding of how and why you feel these emotions. Talk to a mental health professional about how to change destructive thinking patterns, Virgin Island Blueprints for Healthy Living guides for both men and women. A group session with other members of your support network may also help.
Example 4:
**User**:
> what are the preventive measures of diabetes?
**GPT-J-6B-MedDoctor-PEFT**:
> Get physical activity. Make changes in your lifestyle such as a eating a healthly diet, quitting smoke, and getting regular physical activity. Getting physical activity will help you lose weight and keep your blood glucose levels down. Talk with your health care provider before you start new exercise program. You may need to adjust your medication or try a different medicine.
<br>
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
## Finetuning Description
This model was trained on a single A100 (40 GB) for about 1 Day 7 hours.
Run: July 23, 2023
* args: {'lr': 0.001, 'num_epochs': 10, 'seed': 42}
* log_of_epoch_01:{'eval_loss': 0.9936667084693909, 'eval_runtime': 450.8767, 'eval_samples_per_second': 7.246, 'eval_steps_per_second': 0.455, 'epoch': 1.0}
* log_of_epoch_02:{'eval_loss': 0.9738781452178955, 'eval_runtime': 447.3755, 'eval_samples_per_second': 7.303, 'eval_steps_per_second': 0.458, 'epoch': 2.0}
* log_of_epoch_03:{'eval_loss': 0.9600604176521301, 'eval_runtime': 441.2023, 'eval_samples_per_second': 7.405, 'eval_steps_per_second': 0.465, 'epoch': 3.0}
* log_of_epoch_04:{'eval_loss': 0.9634631872177124, 'eval_runtime': 441.53, 'eval_samples_per_second': 7.399, 'eval_steps_per_second': 0.464, 'epoch': 4.0}
* log_of_epoch_05:{'eval_loss': 0.961345374584198, 'eval_runtime': 441.3189, 'eval_samples_per_second': 7.403, 'eval_steps_per_second': 0.465, 'epoch': 5.0}
* log_of_epoch_06:{'eval_loss': 0.9655225872993469, 'eval_runtime': 441.9449, 'eval_samples_per_second': 7.392, 'eval_steps_per_second': 0.464, 'epoch': 6.0}
* log_of_epoch_07:{'eval_loss': 0.9740663766860962, 'eval_runtime': 441.7603, 'eval_samples_per_second': 7.395, 'eval_steps_per_second': 0.464, 'epoch': 7.0}
* log_of_epoch_08:{'eval_loss': 0.9907786846160889, 'eval_runtime': 441.6064, 'eval_samples_per_second': 7.398, 'eval_steps_per_second': 0.464, 'epoch': 8.0}
* log_of_epoch_09:{'eval_loss': 1.0046937465667725, 'eval_runtime': 441.9242, 'eval_samples_per_second': 7.393, 'eval_steps_per_second': 0.464, 'epoch': 9.0}
* log_of_epoch_10:{'train_runtime': 118063.0495, 'train_samples_per_second': 1.107, 'train_steps_per_second': 0.069, 'train_loss': 0.7715376593637642, 'epoch': 10.0}
## PreTraining Data
For more details on the pretraining process, see [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6b).
The data was tokenized using the [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6b) tokenizer.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
## Limitations and Biases
_The following language is modified from [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6b)_
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## How to Use
Install and import the package dependencies:
```python
!pip install -q -U huggingface_hub peft transformers torch accelerate bitsandbytes
```
```python
import re

from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
```
Basic model loading:
```python
INTRO = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
INSTRUCTION_FORMAT = (
"""{intro} ### Instruction: {instruction} ### Input: {input} ### Response: """
)
def load_model_tokenizer_for_generate(pretrained_model_name_or_path: str):
tokenizer = AutoTokenizer.from_pretrained(
pretrained_model_name_or_path, padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path)
return model, tokenizer
```
Once loaded, the model and tokenizer can be used with the following code:
```python
def generate_response(
instruction: str,
input_text: str,
*,
model,
tokenizer,
do_sample: bool = True,
max_new_tokens: int = 500,
top_p: float = 0.92,
top_k: int = 0,
**kwargs,
) -> str:
input_ids = tokenizer(
INSTRUCTION_FORMAT.format(
intro=INTRO, instruction=instruction, input=input_text
),
return_tensors="pt",
).input_ids
gen_tokens = model.generate(
input_ids=input_ids,
pad_token_id=tokenizer.pad_token_id,
do_sample=do_sample,
max_new_tokens=max_new_tokens,
top_p=top_p,
top_k=top_k,
**kwargs,
)
decoded = tokenizer.batch_decode(gen_tokens)[0]
# The response appears after "### Response:". The model has been trained to append "### End" at the end.
m = re.search(r"#+\s*Response:\s*(.+?)#+\s*End", decoded, flags=re.DOTALL)
response = None
if m:
response = m.group(1).strip()
else:
# The model might not generate the "### End" sequence before reaching the max tokens. In this case, return
# everything after "### Response:".
m = re.search(r"#+\s*Response:\s*(.+)", decoded, flags=re.DOTALL)
if m:
response = m.group(1).strip()
else:
print(f"Failed to find response in:\n{decoded}")
return response
```
We can now generate text! For example:
```python
if __name__ == "__main__":
base_model = "EleutherAI/gpt-j-6B"
peft_model_id = "ghimiresunil/MedDoctor"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model, return_dict=True)
trained_model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model)
print("Welcome to the response generation program!")
while True:
instruction = "If you are a doctor, please answer the medical questions based on user's query"
input_text = input("Enter the input text: ")
response = generate_response(
instruction=instruction,
input_text=input_text,
model=trained_model,
tokenizer=tokenizer,
)
print('*' * 100)
print("Generated Response:")
print(response)
print('*' * 100)
continue_generation = input("Do you want to continue (yes/no)? ").lower()
if continue_generation != "yes":
print("Exiting the response generation program.")
break
```
## Acknowledgements
This model was finetuned by [Sunil Ghimire](https://www.linkedin.com/in/ghimiresunil/) on July 23, 2023 and is intended primarily for research purposes.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation and Related Information for [GPT-J-6b](https://huggingface.co/EleutherAI/gpt-j-6b)
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
### Framework versions
- PEFT 0.4.0
|
BXYMartin/long-t5-tglobal-base-openhearthstone-v0.1
|
BXYMartin
| 2023-07-26T15:51:54Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"hearthstone",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-25T11:48:55Z |
---
license: gpl-3.0
language:
- en
pipeline_tag: text2text-generation
tags:
- hearthstone
---
This version of the model is trained on the OpenHearthstone v0.1 dataset under the SAS scheme (state and action as input, next state as output), which is considered an annotation-free scheme.
This model was initialised from the pretrained [long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) and fine-tuned for 10 epochs to overfit on the dataset.
The data serializer for this version encodes game tags as readable words, each followed by the tag's value. The separator character between entities is `|`.
An example input action + input state, the predicted next state, and the ground-truth next state are shown below:
```
Input state:
type 3 sub_option -1 position 0 entity 102 target 66 | whenever you cast a spell, gain armor equal to its_cost. | zone 3 controller 1 entity_id 5 cardtype 4 cost 1 atk 1 health 3 cardrace 18 zone_position 10 tag_ready 1 spell damage +1 battlecry: draw a card. | zone 1 controller 1 entity_id 7 cardtype 4 cost 5 atk 4 health 5 cardrace 24 zone_position 4 tag_ready 1 tradeable deal *4* damage to all minions. | zone 3 controller 1 entity_id 10 cardtype 5 cost 4 zone_position 5 tag_ready 1 battlecry: add a random mage spell to your hand. | zone 1 controller 1 entity_id 12 cardtype 4 cost 1 atk 1 health 2 zone_position 3 tag_ready 1 battlecry: draw spells until your hand is full. | zone 3 controller 1 entity_id 13 cardtype 4 cost 9 atk 4 health 12 cardrace 24 zone_position 9 tag_ready 1 secret: after your opponent plays a minion, deal *7* damage to it and any excess to their hero. | zone 3 controller 1 entity_id 14 cardtype 5 cost 3 class 4 secret 1 zone_position 1 tag_ready 1 battlecry: draw a secret. | zone 3 controller 1 entity_id 19 cardtype 4 cost 2 atk 2 health 3 zone_position 8 tag_ready 1 secret: after your opponent plays a minion, deal *7* damage to it and any excess to their hero. | zone 3 controller 1 entity_id 20 cardtype 5 cost 3 class 4 secret 1 zone_position 2 tag_ready 1 whenever you cast a spell, gain armor equal to its_cost. | zone 3 controller 1 entity_id 25 cardtype 4 cost 1 atk 1 health 3 cardrace 18 zone_position 6 tag_ready 1 deal *7* damage. | zone 3 controller 1 entity_id 27 cardtype 5 cost 4 zone_position 7 tag_ready 1 tradeable deal *4* damage to all minions. | zone 3 controller 1 entity_id 29 cardtype 5 cost 4 zone_position 4 tag_ready 1 your other minions have +1 attack. | zone 1 controller 2 entity_id 44 zone_position 2 cardtype 4 cost 3 atk 2 health 3 tag_ready 1 hero power deal 1 damage. | controller 1 cardtype 10 cost 2 zone 1 entity_id 65 tag_ready 1 hero power deal 2 damage to the enemy hero. | controller 2 cardtype 10 cost 2 zone 1 entity_id 67 tag_ready 1 discover a copy of another spell you ve cast this game. | controller 1 cardtype 5 cost 2 zone 3 entity_id 80 zone_position 3 tag_ready 1 x battlecry: deal 3 damage. honorable kill: gain +3/+3. | controller 1 cardtype 4 cost 5 atk 6 health 6 zone 1 entity_id 91 zone_position 2 tag_ready 1 spellburst: if the spell destroys any minions, summon them. | controller 1 cardtype 4 cost 5 atk 4 health 6 zone 1 entity_id 102 zone_position 1 tag_ready 1 +3/+3 | zone 1 controller 1 entity_id 107 cardtype 6 raid leader is granting this minion +1 attack. | zone 1 controller 2 entity_id 114 cardtype 6
Predicted state:
none 102 exhausted 1 card_target 66 attacking 0 num_attacks_this_turn 1 entity_id 102 |
Real next state:
exhausted 1 num_attacks_this_turn 1 entity_id 102 |
```
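A minimal inference sketch using the standard `transformers` seq2seq API (the serialized state/action string below is abbreviated and purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "BXYMartin/long-t5-tglobal-base-openhearthstone-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Serialized action + state, with entities separated by `|` as described above (truncated here).
state_action = "type 3 sub_option -1 position 0 entity 102 target 66 | ..."
inputs = tokenizer(state_action, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```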
|
Gorfonous/Gorfonous
|
Gorfonous
| 2023-07-26T15:51:05Z | 0 | 0 | null |
[
"text-generation",
"region:us"
] |
text-generation
| 2023-07-26T15:50:04Z |
---
pipeline_tag: text-generation
---
|
concedo/FireGoatInstruct
|
concedo
| 2023-07-26T15:47:51Z | 0 | 3 | null |
[
"license:other",
"region:us"
] | null | 2023-05-10T15:20:01Z |
---
license: other
---
This is an experimental model I made just messing around.
It's a base Pythia 410M-Deduped model, followed by a finetune over an NSFW stories dataset, and then topped off with the Alpaca instruct dataset.
Performs surprisingly well for its size.
|
neuralsentry/vulnfixClassification-StarEncoder-DCM-Balanced
|
neuralsentry
| 2023-07-26T15:43:50Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:neuralsentry/starencoder-git-commits-mlm",
"base_model:finetune:neuralsentry/starencoder-git-commits-mlm",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-26T15:39:24Z |
---
base_model: neuralsentry/starencoder-git-commits-mlm
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: starencoder-vulnfix-classification-balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starencoder-vulnfix-classification-balanced
This model is a fine-tuned version of [neuralsentry/starencoder-git-commits-mlm](https://huggingface.co/neuralsentry/starencoder-git-commits-mlm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2253
- Accuracy: 0.9259
- Precision: 0.9487
- Recall: 0.9391
- F1: 0.9438
- Roc Auc: 0.9195
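A minimal usage sketch, loading the model by its repository id and scoring a raw commit message (the example message is an assumption; label names depend on the model's config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="neuralsentry/vulnfixClassification-StarEncoder-DCM-Balanced",
)

# Hypothetical commit message; the returned label names depend on the model's config.
print(classifier("fix: prevent buffer overflow in packet parser"))
```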
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.3612 | 1.0 | 42 | 0.2358 | 0.9057 | 0.9691 | 0.8860 | 0.9257 | 0.9152 |
| 0.1616 | 2.0 | 84 | 0.1950 | 0.9296 | 0.9490 | 0.9447 | 0.9468 | 0.9223 |
| 0.074 | 3.0 | 126 | 0.2253 | 0.9259 | 0.9487 | 0.9391 | 0.9438 | 0.9195 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
liuyt75/t5-large_prefix_tuning_sentences_allagree_15
|
liuyt75
| 2023-07-26T15:43:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T13:34:42Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
therealvul/so-vits-svc-5.0-old
|
therealvul
| 2023-07-26T15:31:06Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2023-05-27T03:35:25Z |
---
language:
- en
---
This is a collection of deprecated so-vits-svc-5.0 models made by the Pony Preservation Project using audio clips taken from MLP:FiM.
|
gus07ven/xlm-roberta-base-finetuned-ner-spa-en-9
|
gus07ven
| 2023-07-26T15:28:09Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-26T15:15:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-ner-spa-en-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner-spa-en-9
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2044
- F1: 0.5532
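A minimal token-classification usage sketch (the input sentence is illustrative; entity labels depend on the model's config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gus07ven/xlm-roberta-base-finetuned-ner-spa-en-9",
    aggregation_strategy="simple",
)

# Hypothetical mixed Spanish/English input.
print(ner("Maria trabaja en Google en New York."))
```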
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3067 | 1.0 | 536 | 0.2512 | 0.4538 |
| 0.1849 | 2.0 | 1072 | 0.1953 | 0.5436 |
| 0.1337 | 3.0 | 1608 | 0.2044 | 0.5532 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
thesven/distilbert-science-exam-sm
|
thesven
| 2023-07-26T15:27:26Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"multiple-choice",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-07-26T14:45:14Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-science-exam-sm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-science-exam-sm
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the LLM Science Exam dataset from Kaggle for MCQA.
It achieves the following results on the evaluation set:
- Loss: 1.2390
- Accuracy: 0.7
## Intended uses & limitations
Multiple Choice Question & Answer on Medical Topics
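A minimal multiple-choice inference sketch (the question and options below are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "thesven/distilbert-science-exam-sm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Which process do plants use to convert light into chemical energy?"
options = ["Photosynthesis", "Respiration", "Fermentation", "Transpiration"]

# Pair the question with every candidate answer and batch them as one example.
encoding = tokenizer(
    [question] * len(options), options,
    return_tensors="pt", padding=True, truncation=True,
)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print("Predicted answer:", options[logits.argmax(dim=-1).item()])
```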
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.0428 | 0.5 |
| No log | 2.0 | 12 | 1.3253 | 0.5 |
| No log | 3.0 | 18 | 1.2390 | 0.7 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.0
- Tokenizers 0.13.3
|
rbiojout/santacoder-odoo-15
|
rbiojout
| 2023-07-26T15:26:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T09:25:28Z |
---
tags:
- generated_from_trainer
model-index:
- name: santacoder-odoo-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-odoo-15
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 28.0884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.5147 | 1.0 | 1 | 28.0884 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.1.0.dev20230725
- Datasets 2.13.1
- Tokenizers 0.13.3
|
liuyt75/t5-large_prefix_tuning_sentences_allagree_10
|
liuyt75
| 2023-07-26T15:20:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T13:12:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
alkahestry/pygmalion-6b-chaicomp
|
alkahestry
| 2023-07-26T15:05:13Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T12:10:36Z |
---
## Model Details
I finetuned PygmalionAI/pygmalion-6b with QLoRA for 24 hours on 250k samples collected from the SODA and TeacherGPT datasets. This is my first attempt at making an LLM model as an entry to the Chai competition.
### Model Description
- **Model type:** Chatbot
- **Finetuned from model :** PygmalionAI/pygmalion-6b
### Model Sources
Pygmalion-6b: https://huggingface.co/PygmalionAI/pygmalion-6b
## Training Details
### Training Data
For the training data I use 20% of the SODA dataset mixed with the TeacherGPT roleplay dataset.
### Training Procedure
The model was trained for 24 hours on an RTX 4090.
#### Training Hyperparameters
- Training param
>batch_size = 128,
>micro_batch_size = 4,
>num_epochs = 1,
>learning_rate = 3e-4,
>cutoff_len = 512,
>val_set_size = 0
- finetune method
>finetune_method = "qlora"
- prefix tuning hyperparams
>num_virtual_tokens = 32
- lora hyperparams
>lora_r = 16,
>lora_alpha = 16,
>lora_dropout = 0.05,
>lora_target_modules = "q_proj k_proj v_proj"
- llm hyperparams
>bf16 = False,
>load_in_8bit = False,
>group_by_length = False ,
>resume_from_checkpoint = None
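As a sketch, the LoRA hyperparameters above correspond to a `peft` `LoraConfig` roughly like this (an illustration, not the exact training script):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```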
### Results
Me: Hi Nathan, how are you doing today
Nathan: I'm fine...
Me: Then tell me about your day.
Nathan:
--
It was good. We had a lot of fun in school and then we went to the park afterwards.
|
AnReu/albert-for-arqmath-3
|
AnReu
| 2023-07-26T15:02:25Z | 1,116 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"albert",
"feature-extraction",
"retrieval",
"math-retrieval",
"en",
"dataset:MathematicalStackExchange",
"dataset:ARQMath",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-07-01T19:31:49Z |
---
language:
- en
tags:
- retrieval
- math-retrieval
datasets:
- MathematicalStackExchange
- ARQMath
---
# ALBERT for ARQMath 3
This repository contains our best model for ARQMath 3, the math_10 model. It was initialised from ALBERT-base-v2 and further pre-trained on Math StackExchange in three different stages. We also added more LaTeX tokens to the tokenizer to enable a better tokenization of mathematical formulas. math_10 was fine-tuned on a classification task to determine whether a given question (sequence 1) matches a given answer (sequence 2). The classification output can be used for ranking the best answers. For further details, please read our paper: http://ceur-ws.org/Vol-3180/paper-07.pdf.
## Other Models for ARQMath 3
We plan on also publishing the other fine-tuned models as well as the base models. Links to these repositories will be added here soon.
| Model | Initialised from | Pre-training | Fine-Tuned | Link |
|-------------|------------------|----------------------------|-------------------------------------|------|
| roberta_10 | RoBERTa | MathSE (1) | yes, N=10 MathSE | |
| base_10 | ALBERT | MathSE (1) | yes, N=10 MathSE | |
| math_10_add | ALBERT | MathSE (1)-(3) | yes, N=10 MathSE and annotated data | |
| Khan_SE_10 | ALBERT | MathSE (1) | yes, N=10 MathSE | |
| roberta | RoBERTa | MathSE (1) | no | [AnReu/math_pretrained_roberta](https://huggingface.co/AnReu/math_pretrained_roberta) |
| math albert | ALBERT | MathSE (1)-(3) | no | [AnReu/math_albert](https://huggingface.co/AnReu/math_albert) |
| base | ALBERT | MathSE (1) | no | |
| Khan_SE | ALBERT | MathSE (1) mixed with Khan | no | |
### Update
We have also further pre-trained a BERT-base-cased model in the same way as our ALBERT model. You can find it here: [AnReu/math_pretrained_bert](https://huggingface.co/AnReu/math_pretrained_bert).
# Usage
```python
# based on https://huggingface.co/docs/transformers/main/en/task_summary#sequence-classification
import torch

from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("AnReu/albert-for-arqmath-3")
model = AutoModelForSequenceClassification.from_pretrained("AnReu/albert-for-arqmath-3")
classes = ["non relevant", "relevant"]
sequence_0 = "How can I calculate x in $3x = 5$"
sequence_1 = "Just divide by 3: $x = \\frac{5}{3}$"
sequence_2 = "The general rule for squaring a sum is $(a+b)^2=a^2+2ab+b^2$"
# The tokenizer will automatically add any model specific separators (i.e. <CLS> and <SEP>) and tokens to
# the sequence, as well as compute the attention masks.
irrelevant = tokenizer(sequence_0, sequence_2, return_tensors="pt")
relevant = tokenizer(sequence_0, sequence_1, return_tensors="pt")
irrelevant_classification_logits = model(**irrelevant).logits
relevant_classification_logits = model(**relevant).logits
irrelevant_results = torch.softmax(irrelevant_classification_logits, dim=1).tolist()[0]
relevant_results = torch.softmax(relevant_classification_logits, dim=1).tolist()[0]
# Should be irrelevant
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(irrelevant_results[i] * 100))}%")
# Should be relevant
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(relevant_results[i] * 100))}%")
```
# Citation
If you find this model useful, consider citing our paper:
```
@article{reusch2022transformer,
title={Transformer-Encoder and Decoder Models for Questions on Math},
author={Reusch, Anja and Thiele, Maik and Lehner, Wolfgang},
year={2022},
organization={CLEF}
}
```
|
liuyt75/t5-large_prefix_tuning_sentences_allagree_3
|
liuyt75
| 2023-07-26T14:58:04Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T12:40:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
SachinKaushik/results
|
SachinKaushik
| 2023-07-26T14:49:06Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:SachinKaushik/llama-2-7b-instruct-pyCode-4bitshards",
"base_model:finetune:SachinKaushik/llama-2-7b-instruct-pyCode-4bitshards",
"region:us"
] | null | 2023-07-26T11:07:26Z |
---
base_model: SachinKaushik/llama-2-7b-instruct-pyCode-4bitshards
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [SachinKaushik/llama-2-7b-instruct-pyCode-4bitshards](https://huggingface.co/SachinKaushik/llama-2-7b-instruct-pyCode-4bitshards) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.1.0
- Tokenizers 0.13.3
|
mojtabak/Reinforce-Cartpole-v1
|
mojtabak
| 2023-07-26T14:41:21Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-26T13:46:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
asenella/MVTCAE_beta_25_scale_False_seed_0
|
asenella
| 2023-07-26T14:40:38Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:02:52Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_25_scale_False_seed_1
|
asenella
| 2023-07-26T14:40:34Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:03:49Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_5_scale_True_seed_0
|
asenella
| 2023-07-26T14:40:34Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:05:02Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_25_scale_False_seed_3
|
asenella
| 2023-07-26T14:40:30Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:49:32Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_10_scale_True_seed_1
|
asenella
| 2023-07-26T14:40:30Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:14:23Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_25_scale_True_seed_2
|
asenella
| 2023-07-26T14:40:28Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:04:19Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_5_scale_True_seed_2
|
asenella
| 2023-07-26T14:40:28Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:01:56Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_25_scale_True_seed_3
|
asenella
| 2023-07-26T14:40:27Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:32:08Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_25_scale_False_seed_3
|
asenella
| 2023-07-26T14:40:22Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T06:50:46Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_25_scale_False_seed_1
|
asenella
| 2023-07-26T14:40:16Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:28:25Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_5_scale_True_seed_1
|
asenella
| 2023-07-26T14:40:16Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:13:01Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_False_seed_0
|
asenella
| 2023-07-26T14:40:08Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:23:27Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_False_seed_3
|
asenella
| 2023-07-26T14:40:01Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:11:41Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_5_scale_False_seed_2
|
asenella
| 2023-07-26T14:39:58Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:15:58Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_10_scale_False_seed_3
|
asenella
| 2023-07-26T14:39:56Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T06:25:13Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_25_scale_True_seed_1
|
asenella
| 2023-07-26T14:39:50Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:14:43Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_10_scale_False_seed_0
|
asenella
| 2023-07-26T14:39:47Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:13:30Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_25_scale_True_seed_0
|
asenella
| 2023-07-26T14:39:36Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:11:43Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_True_seed_1
|
asenella
| 2023-07-26T14:39:34Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:25:58Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_True_seed_2
|
asenella
| 2023-07-26T14:39:25Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:37:25Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_10_scale_False_seed_1
|
asenella
| 2023-07-26T14:39:24Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:16:47Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_5_scale_True_seed_1
|
asenella
| 2023-07-26T14:39:21Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:05:03Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_True_seed_0
|
asenella
| 2023-07-26T14:39:02Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T00:01:23Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_10_scale_True_seed_3
|
asenella
| 2023-07-26T14:39:00Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T09:21:57Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_5_scale_False_seed_2
|
asenella
| 2023-07-26T14:38:59Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T19:17:29Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_False_seed_2
|
asenella
| 2023-07-26T14:38:55Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:48:55Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_False_seed_3
|
asenella
| 2023-07-26T14:38:48Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T02:20:19Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_False_seed_0
|
asenella
| 2023-07-26T14:38:41Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:23:18Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_False_seed_0
|
asenella
| 2023-07-26T14:38:40Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:01:23Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_False_seed_1
|
asenella
| 2023-07-26T14:38:34Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:26:35Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_False_seed_3
|
asenella
| 2023-07-26T14:38:34Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:22:32Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_True_seed_1
|
asenella
| 2023-07-26T14:38:19Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:26:30Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_True_seed_0
|
asenella
| 2023-07-26T14:38:13Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T00:13:05Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_True_seed_2
|
asenella
| 2023-07-26T14:38:06Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:32:43Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_False_seed_2
|
asenella
| 2023-07-26T14:37:58Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:09:13Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_False_seed_3
|
asenella
| 2023-07-26T14:37:52Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T02:32:10Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_True_seed_3
|
asenella
| 2023-07-26T14:37:51Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:14:38Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_False_seed_1
|
asenella
| 2023-07-26T14:37:45Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:31:50Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_True_seed_2
|
asenella
| 2023-07-26T14:37:36Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:08:43Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_10_scale_False_seed_3
|
asenella
| 2023-07-26T14:37:31Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T21:26:32Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_True_seed_2
|
asenella
| 2023-07-26T14:37:30Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:38:26Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_10_scale_False_seed_2
|
asenella
| 2023-07-26T14:37:24Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:21:45Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_True_seed_1
|
asenella
| 2023-07-26T14:36:55Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:25:53Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_10_scale_True_seed_1
|
asenella
| 2023-07-26T14:36:52Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T16:48:37Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_5_scale_False_seed_3
|
asenella
| 2023-07-26T14:36:45Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:07:50Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_10_scale_True_seed_0
|
asenella
| 2023-07-26T14:36:45Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:20:32Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_25_scale_False_seed_1
|
asenella
| 2023-07-26T14:36:25Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T16:40:52Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_5_scale_False_seed_2
|
asenella
| 2023-07-26T14:36:17Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:54:19Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_5_scale_False_seed_1
|
asenella
| 2023-07-26T14:36:10Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:22:09Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Zulfar/my_awesome_billsum_model
|
Zulfar
| 2023-07-26T14:33:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-19T14:23:21Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1351
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5020
- Rouge1: 0.1351
- Rouge2: 0.0448
- Rougel: 0.1114
- Rougelsum: 0.1115
- Gen Len: 19.0
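A minimal summarization usage sketch (the input bill text is a placeholder; the `summarize:` prefix follows the usual T5 convention and is an assumption about how the model was trained):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Zulfar/my_awesome_billsum_model")

bill_text = "Section 1. This act may be cited as the ..."  # placeholder bill text
print(summarizer("summarize: " + bill_text, max_new_tokens=60)[0]["summary_text"])
```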
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7984 | 0.1232 | 0.0334 | 0.1021 | 0.102 | 19.0 |
| No log | 2.0 | 124 | 2.5833 | 0.1315 | 0.0428 | 0.1091 | 0.1092 | 19.0 |
| No log | 3.0 | 186 | 2.5181 | 0.1348 | 0.0447 | 0.1107 | 0.1108 | 19.0 |
| No log | 4.0 | 248 | 2.5020 | 0.1351 | 0.0448 | 0.1114 | 0.1115 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Nico4one/newrepo
|
Nico4one
| 2023-07-26T14:32:51Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-26T14:32:44Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Nico4one/newrepo
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
PiXeL99/falcon-7b-instruct-qlora-telecom
|
PiXeL99
| 2023-07-26T14:17:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T04:21:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
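A minimal loading sketch that mirrors the 4-bit configuration above (the base model id is an assumption inferred from the repository name):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "PiXeL99/falcon-7b-instruct-qlora-telecom")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
```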
### Framework versions
- PEFT 0.5.0.dev0
|
BrianS15/prot_bert-finetuned-CDR1
|
BrianS15
| 2023-07-26T14:11:46Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:Rostlab/prot_bert",
"base_model:finetune:Rostlab/prot_bert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-26T14:08:18Z |
---
base_model: Rostlab/prot_bert
tags:
- generated_from_trainer
model-index:
- name: prot_bert-finetuned-CDR1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_bert-finetuned-CDR1
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
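A minimal fill-mask usage sketch, assuming the ProtBert convention of space-separated amino acids (the sequence below is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="BrianS15/prot_bert-finetuned-CDR1")
print(unmasker("D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V"))
```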
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 15 | nan |
| 3.8023 | 2.0 | 30 | nan |
| 2.274 | 3.0 | 45 | 1.8595 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
oscar-corpus/harmful-kenlms
|
oscar-corpus
| 2023-07-26T14:06:33Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-03-09T18:02:14Z |
---
license: apache-2.0
---
These are KenLM models trained on all the content tagged as `adult` on OSCAR 22.01.
Further documentation is coming soon.
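Until then, a minimal scoring sketch with the `kenlm` Python bindings (the file name below is a placeholder; substitute whichever model file from this repository you downloaded):
```python
# Minimal sketch: score a sentence with one of these KenLM models via the kenlm bindings.
# "adult.binary" is a placeholder file name; substitute the file you actually downloaded.
import kenlm

model = kenlm.Model("adult.binary")
print(model.score("this is a test sentence", bos=True, eos=True))  # log10 probability
print(model.perplexity("this is a test sentence"))
```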
|
Za88yes/Cis
|
Za88yes
| 2023-07-26T14:05:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-26T13:44:30Z |
---
license: creativeml-openrail-m
---
|
dfomin/dqn-SpaceInvadersNoFrameskip-v4
|
dfomin
| 2023-07-26T14:04:04Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-23T09:32:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 643.50 +/- 266.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dfomin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dfomin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dfomin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
IlyaGusev/saiga_13b_lora
|
IlyaGusev
| 2023-07-26T14:03:49Z | 0 | 17 | null |
[
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/oasst1_ru_main_branch",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"dataset:IlyaGusev/ru_turbo_alpaca_evol_instruct",
"dataset:lksy/ru_instruct_gpt4",
"license:cc-by-4.0",
"region:us"
] |
text-generation
| 2023-04-14T11:57:53Z |
---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/ru_turbo_alpaca_evol_instruct
- lksy/ru_instruct_gpt4
language:
- ru
pipeline_tag: conversational
license: cc-by-4.0
---
# Saiga 13B, Russian LLaMA-based chatbot
Based on [LLaMA 13B](https://huggingface.co/huggyllama/llama-13b).
This is an adapter-only version.
llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_13b_ggml)
Colab: [link](https://colab.research.google.com/drive/1WuoSlKMdGB-D_OQrHOFta13Ph1--Eq7L)
Training code: [link](https://github.com/IlyaGusev/rulm/tree/master/self_instruct)
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
MODEL_NAME = "IlyaGusev/saiga_13b_lora"
DEFAULT_MESSAGE_TEMPLATE = "<s>{role}\n{content}</s>\n"
DEFAULT_SYSTEM_PROMPT = "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."
class Conversation:
def __init__(
self,
message_template=DEFAULT_MESSAGE_TEMPLATE,
system_prompt=DEFAULT_SYSTEM_PROMPT,
start_token_id=1,
bot_token_id=9225
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{
"role": "system",
"content": system_prompt
}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({
"role": "user",
"content": message
})
def add_bot_message(self, message):
self.messages.append({
"role": "bot",
"content": message
})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode([self.start_token_id, self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(prompt, return_tensors="pt")
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(
**data,
generation_config=generation_config
)[0]
output_ids = output_ids[len(data["input_ids"][0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
config = PeftConfig.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto"
)
model = PeftModel.from_pretrained(
model,
MODEL_NAME,
torch_dtype=torch.float16
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
generation_config = GenerationConfig.from_pretrained(MODEL_NAME)
print(generation_config)
inputs = ["Почему трава зеленая?", "Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч"]
for inp in inputs:
conversation = Conversation()
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
print()
print("==============================")
print()
```
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет у растений обусловлен наличием хлорофилла в их листьях. Хлорофилл - это пигмент, который отвечает за фотосинтез, то есть процесс преобразования солнечной энергии в химическую энергию. Он находится в специальных органелах, называемых хлоропластами, которые имеются только у растительных клеток. Когда свет попадает на листья, хлорофилл переходит из состояния "растворенного" в "состояние связанное", что позволяет ему использовать энергию света для синтеза сахаров. Этот процесс происходит в хлоропластах и приводит к образованию кислорода и углекислого газа. В результате этого процесса листья становятся зелеными.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Татьяна была очень увлеченной девочкой, которая всегда любила играть в футбол. Она была настоящей звездой на поле, и ее команда всегда побеждала благодаря ее таланту и энергии.
Однажды, когда Татьяна играла в футбол со своими друзьями, она случайно потеряла свой мяч. Она была очень расстроена, так как это был ее любимый мяч, который она использовала уже несколько лет.
Но несмотря на то, что она была расстроена, Татьяна решила продолжить игру. Она начала искать свой мяч, но ничего не нашла. В конце концов, она решила вернуться домой и попросить помощи у своих родителей.
Когда Татьяна пришла домой, она рассказала своим родителям о том, что произошло. Они были очень сочувствующими и предложили ей найти новый мяч.
Татьяна была очень благодарна своим родителям за их помощь, и они отправились вместе в магазин спортивного инвентаря. Там они выбрали для нее новый мяч, который был почти такой же, как старый.
После того, как Татьяна получила новый мяч, она вернулась на поле и продолжила играть. Но теперь она знала, что ее любовь к игре не зависит от одного мяча - она могла играть и без него.
Также, Татьяна поняла, что ее родители всегда будут рядом ей, чтобы помочь ей в трудные моменты. И она была очень благодарна им за это.
Таким образом, Татьяна научилась жить без своего старого мяча, и она стала более сильной и уверенной в себе после этого случая. Ее любовь к игре только укрепилась, и она продолжала играть в футбол каждую свободную минуту.
```
```
User: Реши уравнение 4x + 5 = 21
Saiga: Для решения данного уравнения необходимо выполнить следующие шаги:
1. Перенесите все члены в левую часть уравнения на другую сторону равенства:
4x + 5 = 21
2. Отмените оба члена, переведя их в правую часть уравнения:
4x = 21 - 5
3. Разделите оба члена на общее деление:
4x / 4 = (21 - 5) / 4
4x / 4 = 5 - 5/4
4x / 4 = 0
4x = 0
4x = 0 * 4
4x = 0
Ответ: x = 0.
```
```
User: Заполни пропуски в предложении. Дано: Я пытался ____ от маньяка, но он меня настиг
Saiga: Я пытался убежать от маньяка, но он меня настиг.
```
v2:
- revision fe7034727ce126498559df277b08da76001edc16
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/e0qzokzl)
- 7 datasets: ru_turbo_alpaca, ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, ru_turbo_alpaca_evol_instruct (1/2 iteration), ru_instruct_gpt4
- Datasets merging script: [create_chat_set.py](https://github.com/IlyaGusev/rulm/blob/e4238fd9a196405b566a2d5838ab44b7a0f4dc31/self_instruct/src/data_processing/create_chat_set.py)
- saiga13b_v2 vs saiga13b_v1: 105-10-61
v1:
- revision bd4823198d443ab0a18be3768c9985de6e249d43
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/rue8m51s/overview)
- 4 datasets: ru_turbo_alpaca, ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch
- Datasets merging script: [create_chat_set.py](https://github.com/IlyaGusev/rulm/blob/ef58f3d82d6e7b3784d42167ff69188d3766ab61/self_instruct/src/data_processing/create_chat_set.py)
- Loss: 0.870
- Context length: 2000
- Conversational template: `"<s>{role}\n{content}</s>"`
- Possible roles: `["system", "user", "bot"]`
- System prompt: `"Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."`
|
IlyaGusev/saiga_30b_lora
|
IlyaGusev
| 2023-07-26T14:03:31Z | 0 | 22 | null |
[
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/oasst1_ru_main_branch",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"dataset:IlyaGusev/ru_turbo_alpaca_evol_instruct",
"dataset:lksy/ru_instruct_gpt4",
"license:cc-by-4.0",
"region:us"
] |
text-generation
| 2023-04-17T21:35:03Z |
---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/ru_turbo_alpaca_evol_instruct
- lksy/ru_instruct_gpt4
language:
- ru
pipeline_tag: conversational
license: cc-by-4.0
---
# Saiga 30B, Russian LLaMA-based chatbot
Based on [LLaMA 30B](https://huggingface.co/huggyllama/llama-30b).
This is an adapter-only version.
Colab: [link](https://colab.research.google.com/drive/1zvF0I52NJ7ngAk_zWzEA_QOYOMCdITZN?usp=sharing)
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_30b_ggml)
Training code: [link](https://github.com/IlyaGusev/rulm/tree/master/self_instruct)
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
MODEL_NAME = "IlyaGusev/saiga_30b_lora"
DEFAULT_MESSAGE_TEMPLATE = "<s>{role}\n{content}</s>\n"
DEFAULT_SYSTEM_PROMPT = "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."
class Conversation:
def __init__(
self,
message_template=DEFAULT_MESSAGE_TEMPLATE,
system_prompt=DEFAULT_SYSTEM_PROMPT,
start_token_id=1,
bot_token_id=9225
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{
"role": "system",
"content": system_prompt
}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({
"role": "user",
"content": message
})
def add_bot_message(self, message):
self.messages.append({
"role": "bot",
"content": message
})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode([self.start_token_id, self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(prompt, return_tensors="pt")
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(
**data,
generation_config=generation_config
)[0]
output_ids = output_ids[len(data["input_ids"][0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
config = PeftConfig.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto"
)
model = PeftModel.from_pretrained(
model,
MODEL_NAME,
torch_dtype=torch.float16
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
generation_config = GenerationConfig.from_pretrained(MODEL_NAME)
print(generation_config)
inputs = ["Почему трава зеленая?", "Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч"]
for inp in inputs:
conversation = Conversation()
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
print()
print("==============================")
print()
```
Examples:
```
User: Почему трава зеленая?
Saiga: Трава зеленая потому, что она содержит хлорофилл - пигмент, который отвечает за зеленый цвет листьев растений. Хлорофилл используется для фотосинтеза - процесса преобразования света в энергию, необходимую для роста и развития растения. Кроме того, зеленый цвет является наиболее распространенным цветом среди всех растительных видов на Земле, поэтому большинство трав имеют зеленый цвет.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды летом, когда все дети уже вышли из школы и начали проводить время на улице, Таня решила поиграть в футбол со своими друзьями. Она взяла свой любимый мяч и отправилась к парку, где они обычно играют.
Когда она пришла туда, ее друзья уже были готовы к игре. Они разделились на две команды и начали играть. Таня была очень энергичной и активной, она бежала по полю, прыгала и ловила мяч.
Но однажды, когда она пыталась заблокировать удар соперника, мяч ударился ей прямо в лицо. Она упала на землю и не могла двигаться. Ее друзья сразу же подобрали ее и провели медицинскую помощь.
Так как Тане было больно, они решили не продолжать игру и вернуться домой. Но Таня не хотела пропустить возможность играть в футбол, поэтому она решила найти другой способ.
Она купила себе новый мяч и начала тренироваться дома. Она старалась делать упражнения каждый день, чтобы улучшить свою технику и скорость. Вскоре она стала лучшей футболисткой в своей команде.
Ее друзья были поражены ее умением играть в футбол и часто приглашали ее на матчи. Таня была очень горда своей новой профессией и никогда не переставала тренироваться.
В конце концов, благодаря своим усилиям, она стала настоящей звездой футбола и даже получила предложение от известного клуба. Она приняла его и начала играть на профессиональном уровне.
Таким образом, мяч стал для Тани не только игрушкой, но и источником ее успеха и счастья.
```
```
User: Реши уравнение 4x + 5 = 21
Saiga: Для решения этого уравнения нужно найти значение переменной x. Для этого нужно вычесть из левой части уравнения 5, чтобы получить 4x = 16. Далее, делим обе стороны на 4, чтобы получить x = 4. Таким образом, x равно 4.
```
```
User: Заполни пропуски в предложении. Дано: Я пытался ____ от маньяка, но он меня настиг
Saiga: Я пытался убежать от маньяка, но он меня настиг.
```
v2:
- revision c7a77373a922758317582d55474a7c34478669e2
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/3r0nzy6q?wbreferrer=run-alert)
- 7 datasets: ru_turbo_alpaca, ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, ru_turbo_alpaca_evol_instruct (1/2 iterations), ru_instruct_gpt4
- Datasets merging script: [create_chat_set.py](https://github.com/IlyaGusev/rulm/blob/e4238fd9a196405b566a2d5838ab44b7a0f4dc31/self_instruct/src/data_processing/create_chat_set.py)
- saiga30b_v2 vs saiga30b_v1: 86-19-71
v1:
- revision 8573592257aa6476e8179f09a922aff513592886
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/x2c19kfk)
- 4 datasets: ru_turbo_alpaca, ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch
- Datasets merging script: [create_chat_set.py](https://github.com/IlyaGusev/rulm/blob/ef58f3d82d6e7b3784d42167ff69188d3766ab61/self_instruct/src/data_processing/create_chat_set.py)
- Loss: 0.817
- Context length: 2000
- Conversational template: `"<s>{role}\n{content}</s>"`
- Possible roles: `["system", "user", "bot"]`
- System prompt: `"Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."`
|
MauriceV2021/AuroraSDGsModel
|
MauriceV2021
| 2023-07-26T13:56:22Z | 0 | 0 |
keras
|
[
"keras",
"text-classification",
"en",
"nl",
"de",
"fr",
"it",
"is",
"cs",
"da",
"es",
"ca",
"license:cc-by-4.0",
"region:us"
] |
text-classification
| 2022-05-06T11:20:03Z |
---
license: cc-by-4.0
language:
- en
- nl
- de
- fr
- it
- is
- cs
- da
- es
- ca
metrics:
- accuracy
- matthews_correlation
pipeline_tag: text-classification
library_name: keras
---
# Aurora SDG Multi-Label Multi-Class Model
<!-- Provide a quick summary of what the model is/does. -->
This model is able to classify texts related to United Nations sustainable development goals (SDG) in multiple languages.

Source: https://sdgs.un.org/goals
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This text classification model was developed by fine-tuning the bert-base-uncased pre-trained model. The training data for this fine-tuned model was sourced from the publicly available OSDG Community Dataset (OSDG-CD) at https://zenodo.org/record/5550238#.ZBulfcJByF4.
This model was made as part of academic research at Deakin University. The goal was to make a transformer-based SDG text classification model that anyone could use. Only the first 16 UN SDGs are supported. The primary model details are highlighted below:
- **Model type:** Text classification
- **Language(s) (NLP):** English, Dutch, German, Icelandic, French, Czech, Italian, Danish, Spanish, Catalan
- **License:** cc-by-4.0
- **Finetuned from model:** bert-base-multilingual-uncased
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** option 1: https://huggingface.co/MauriceV2021/AuroraSDGsModel ; option 2 https://doi.org/10.5281/zenodo.7304546
- **Demo:** option 1: https://huggingface.co/spaces/MauriceV2021/SDGclassifier ; option 2: https://aurora-universities.eu/sdg-research/classify/
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This is a fine-tuned model and therefore requires no further training.
## How to Get Started with the Model
Use the code here to get started with the model: https://github.com/Aurora-Network-Global/sdgs_many_berts
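As a minimal sketch (assuming the repository stores a standard Keras SavedModel; input preprocessing must match what was used at training time, as shown in the repository above), the model can be loaded directly from the Hub:
```python
# Minimal sketch: load the Keras model directly from the Hub.
# Assumes the repository stores a standard Keras SavedModel; tokenization/preprocessing
# of input texts must follow the training pipeline linked above.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("MauriceV2021/AuroraSDGsModel")
model.summary()
```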
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data includes text from 1.4 million titles and abstracts of academic research papers, labeled with SDG Goals and Targets according to an initial validated query.
See training data here: https://doi.org/10.5281/zenodo.5205672
### Evaluation of the Training data
- Avg_precision = 0.70
- Avg_recall = 0.15
Data evaluated by 244 domain expert senior researchers.
See evaluation report on the training data here: https://doi.org/10.5281/zenodo.4917107
## Training Hyperparameters
- Num_epoch = 3
- Learning rate = 5e-5
- Batch size = 16
## Evaluation
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
- Accuracy = 0.9
- Matthews correlation = 0.89
See evaluation report on the model here: https://doi.org/10.5281/zenodo.5603019
## Citation
Sadick, A.M. (2023). SDG classification with BERT. https://huggingface.co/sadickam/sdg-classification-bert
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
<!--## Model Card Contact -->
|
IlyaGusev/saiga_7b_ggml
|
IlyaGusev
| 2023-07-26T13:40:27Z | 0 | 16 | null |
[
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"region:us"
] |
text-generation
| 2023-05-15T20:06:05Z |
---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
language:
- ru
inference: false
pipeline_tag: conversational
---
Llama.cpp-compatible versions of the original [7B model](https://huggingface.co/IlyaGusev/saiga_7b_lora).
* Download one of the versions, for example `ggml-model-q4_1.bin`.
* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
How to run:
```bash
sudo apt-get install git-lfs
pip install llama-cpp-python fire
python3 interact_llamacpp.py ggml-model-q4_1.bin
```
System requirements:
* 10GB RAM
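Alternatively, a minimal sketch using the `llama-cpp-python` API directly; the prompt follows the Saiga conversational template documented for the LoRA versions (`"<s>{role}\n{content}</s>\n"` with roles system/user/bot), and the file name matches the q4_1 version mentioned above:
```python
# Minimal sketch: run the GGML file with llama-cpp-python instead of the interact script.
# The prompt format follows the Saiga conversational template from the LoRA card.
from llama_cpp import Llama

llm = Llama(model_path="ggml-model-q4_1.bin", n_ctx=2000)
prompt = (
    "<s>system\nТы — Сайга, русскоязычный автоматический ассистент. "
    "Ты разговариваешь с людьми и помогаешь им.</s>\n"
    "<s>user\nПочему трава зеленая?</s>\n"
    "<s>bot\n"
)
output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"].strip())
```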
|
ailabturkiye/YavuzCetin
|
ailabturkiye
| 2023-07-26T13:37:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-26T13:33:49Z |
Yavuz Cetin RVC model, trained for 290 epochs.
The dataset and training were made by Teurzein.
|
sukiee/qlora-koalpaca-polyglot-5.8b-hotissue_v2
|
sukiee
| 2023-07-26T13:15:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T17:02:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
HatsiIvaMiku/HatsiIvaRVC
|
HatsiIvaMiku
| 2023-07-26T13:08:29Z | 0 | 0 | null |
[
"en",
"ja",
"license:openrail",
"region:us"
] | null | 2023-07-26T13:07:21Z |
---
license: openrail
language:
- en
- ja
---
|
mgigena/roberta-large-cuad
|
mgigena
| 2023-07-26T12:48:33Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"legal-contract-review",
"cuad",
"en",
"dataset:cuad",
"arxiv:2103.06268",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-26T12:38:58Z |
---
language:
- en
license: cc-by-4.0
datasets:
- cuad
pipeline_tag: question-answering
tags:
- legal-contract-review
- roberta
- cuad
library_name: transformers
---
# Model Card for roberta-large-cuad
# Model Details
## Model Description
- **Developed by:** Hendrycks et al.
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** cc-by-4.0
- **Related Models:**
- **Parent Model:** RoBERTa
- **Resources for more information:**
- GitHub Repo: [TheAtticusProject](https://github.com/TheAtticusProject/cuad)
- Associated Paper: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- Project website: [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad)
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
# Training Details
Read: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data, Procedure, Preprocessing, etc.
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={NeurIPS},
year={2021}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("mgigena/roberta-large-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("mgigena/roberta-large-cuad")
```
</details>
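A short usage sketch on top of that (the contract clause and question below are made-up illustrative examples, not taken from CUAD):
```python
# Minimal usage sketch: extractive question answering over a contract snippet.
# The clause and question are invented examples for illustration only.
from transformers import pipeline

qa = pipeline("question-answering", model="mgigena/roberta-large-cuad")
context = (
    "This Agreement shall commence on January 1, 2021 and shall continue in effect "
    "for a period of two (2) years unless terminated earlier in accordance with Section 9."
)
result = qa(question="What is the term of the contract?", context=context)
print(result["answer"], result["score"])
```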
|
Jonathaniu/llama2-breast-cancer-7b-knowledge-epoch-5
|
Jonathaniu
| 2023-07-26T12:25:16Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T12:25:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
liuyt75/t5-base_prefix_tuning_sentences_75agree_5
|
liuyt75
| 2023-07-26T12:25:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-26T08:03:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|