| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-27 00:39:58) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 521 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-27 00:39:49) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
johaanm/test-planner-alpha-V8.5
|
johaanm
| 2023-09-17T18:21:52Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-17T18:21:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
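As a rough sketch (not part of the card itself), the values above can be mirrored as a plain Python dict of the kind that would be passed to `transformers.BitsAndBytesConfig`; the consistency check below is purely illustrative and not part of PEFT:

```python
# The quantization config listed above, expressed as a plain dict.
bnb_config = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float16",
}

def check_bnb_config(cfg: dict) -> str:
    """Return the active quantization mode, rejecting contradictory flags."""
    if cfg["load_in_8bit"] and cfg["load_in_4bit"]:
        raise ValueError("load_in_8bit and load_in_4bit are mutually exclusive")
    if cfg["load_in_4bit"]:
        return f"4-bit ({cfg['bnb_4bit_quant_type']}, compute in {cfg['bnb_4bit_compute_dtype']})"
    if cfg["load_in_8bit"]:
        return "8-bit"
    return "none"

print(check_bnb_config(bnb_config))  # 4-bit (nf4, compute in float16)
```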
### Framework versions
- PEFT 0.4.0
|
nalnnzph/ppo-Huggy
|
nalnnzph
| 2023-09-17T18:20:44Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-17T18:20:39Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* on understanding how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: nalnnzph/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
CyberHarem/aiba_yumi_idolmastercinderellagirls
|
CyberHarem
| 2023-09-17T18:19:58Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/aiba_yumi_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-17T17:57:51Z |
---
license: mit
datasets:
- CyberHarem/aiba_yumi_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of aiba_yumi_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file is used as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 7020, you need to download `7020/aiba_yumi_idolmastercinderellagirls.pt` as the embedding and `7020/aiba_yumi_idolmastercinderellagirls.safetensors` as the LoRA. By using both files together, you can generate images of the desired character.
**The best step we recommend is 7020**, with a score of 0.950. The trigger words are:
1. `aiba_yumi_idolmastercinderellagirls`
2. `short_hair, brown_eyes, blush, smile, blonde_hair, bangs, breasts, open_mouth, collarbone, brown_hair, medium_breasts`
We do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.911 | [Download](8100/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/pattern_10.png) | [<NSFW, click to see>](8100/previews/pattern_11.png) |  |  | [<NSFW, click to see>](8100/previews/pattern_14.png) |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.935 | [Download](7560/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/pattern_10.png) | [<NSFW, click to see>](7560/previews/pattern_11.png) |  |  | [<NSFW, click to see>](7560/previews/pattern_14.png) |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| **7020** | **0.950** | [**Download**](7020/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/pattern_10.png) | [<NSFW, click to see>](7020/previews/pattern_11.png) |  |  | [<NSFW, click to see>](7020/previews/pattern_14.png) |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.930 | [Download](6480/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/pattern_10.png) | [<NSFW, click to see>](6480/previews/pattern_11.png) |  |  | [<NSFW, click to see>](6480/previews/pattern_14.png) |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.938 | [Download](5940/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/pattern_10.png) | [<NSFW, click to see>](5940/previews/pattern_11.png) |  |  | [<NSFW, click to see>](5940/previews/pattern_14.png) |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.934 | [Download](5400/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/pattern_10.png) | [<NSFW, click to see>](5400/previews/pattern_11.png) |  |  | [<NSFW, click to see>](5400/previews/pattern_14.png) |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.929 | [Download](4860/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/pattern_10.png) | [<NSFW, click to see>](4860/previews/pattern_11.png) |  |  | [<NSFW, click to see>](4860/previews/pattern_14.png) |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.942 | [Download](4320/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/pattern_10.png) | [<NSFW, click to see>](4320/previews/pattern_11.png) |  |  | [<NSFW, click to see>](4320/previews/pattern_14.png) |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.896 | [Download](3780/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/pattern_10.png) | [<NSFW, click to see>](3780/previews/pattern_11.png) |  |  | [<NSFW, click to see>](3780/previews/pattern_14.png) |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.903 | [Download](3240/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/pattern_10.png) | [<NSFW, click to see>](3240/previews/pattern_11.png) |  |  | [<NSFW, click to see>](3240/previews/pattern_14.png) |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.868 | [Download](2700/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/pattern_10.png) | [<NSFW, click to see>](2700/previews/pattern_11.png) |  |  | [<NSFW, click to see>](2700/previews/pattern_14.png) |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.882 | [Download](2160/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/pattern_10.png) | [<NSFW, click to see>](2160/previews/pattern_11.png) |  |  | [<NSFW, click to see>](2160/previews/pattern_14.png) |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.899 | [Download](1620/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/pattern_10.png) | [<NSFW, click to see>](1620/previews/pattern_11.png) |  |  | [<NSFW, click to see>](1620/previews/pattern_14.png) |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.917 | [Download](1080/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/pattern_10.png) | [<NSFW, click to see>](1080/previews/pattern_11.png) |  |  | [<NSFW, click to see>](1080/previews/pattern_14.png) |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.763 | [Download](540/aiba_yumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/pattern_10.png) | [<NSFW, click to see>](540/previews/pattern_11.png) |  |  | [<NSFW, click to see>](540/previews/pattern_14.png) |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
QMB15/mythomax-13B-8.13bit-MAX-exl2
|
QMB15
| 2023-09-17T18:19:37Z | 8 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-17T17:47:29Z |
---
license: other
language:
- en
---
This is an exllama V2 quantization of https://huggingface.co/Gryphe/MythoMax-L2-13b
This particular version is designed for maximum quality at the cost of size.
I noticed that the previous 8bpw version used a small bitrate for some layers and reported a lower quantized ppl than its base ppl, implying that the layer optimizer was overfitting to the calibration dataset.
In response, I edited measurement.json to add +1 error to all bitrates except 8.13 (the max).
(Don't reuse that file for other quants!)
As a result, this version uses the best 8-bit-32g quantization mode for all layers. In out-of-sample tests, this squeezes out slightly better perplexity than the 8bit version.
Calibration data: https://huggingface.co/datasets/wikitext/resolve/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
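For readers comparing quantized and base ppl as described above, the standard relation is that perplexity is the exponential of the mean per-token negative log-likelihood; a minimal sketch (the sample values are illustrative, not from this model's evaluation):

```python
import math

def perplexity(nll_per_token):
    """Perplexity is exp of the mean negative log-likelihood per token."""
    return math.exp(sum(nll_per_token) / len(nll_per_token))

# A quantized model that genuinely matches its fp16 base should score a ppl
# close to (not below) the base ppl on held-out text; a lower value measured
# only on the calibration set is a sign of overfitting to that set.
sample_nlls = [2.31, 2.05, 2.40, 2.18]  # hypothetical per-token losses
print(round(perplexity(sample_nlls), 3))
```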
An improved, potentially even perfected variant of MythoMix, my [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique. The main difference with MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of a model, resulting in increased coherency across the entire structure.
The script and the accompanying templates I used to produce both can [be found here](https://github.com/Gryphe/BlockMerge_Gradient/tree/main/YAML).
This model is proficient at both roleplaying and storywriting due to its unique nature.
Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) (You're the best!)
## Model details
The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time.)
This type of merge cannot be illustrated, as each of its 363 tensors had a unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further finetune its behaviour.
## Prompt Format
This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
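A minimal helper for assembling this Alpaca-style prompt might look like the following sketch (the function names and example values are illustrative, not part of the model's tooling):

```python
def alpaca_prompt(system: str, instruction: str) -> str:
    """Assemble a prompt in the Alpaca-style format shown above."""
    return f"{system}\n### Instruction:\n{instruction}\n### Response:\n"

def roleplay_instruction(char: str, user: str) -> str:
    # The suggested roleplay instruction, with the placeholders filled in.
    return (f"Write {char}'s next reply in a chat between {user} and {char}. "
            f"Write a single reply only.")

prompt = alpaca_prompt("You are a creative storyteller.",
                       roleplay_instruction("Aria", "Sam"))
print(prompt)
```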
|
reallygoodtechdeals/autotrain-lane-center-8-89748143997
|
reallygoodtechdeals
| 2023-09-17T18:05:35Z | 201 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"vision",
"dataset:reallygoodtechdeals/autotrain-data-lane-center-8",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T18:03:59Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- reallygoodtechdeals/autotrain-data-lane-center-8
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.49428603121272385
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 89748143997
- CO2 Emissions (in grams): 0.4943
## Validation Metrics
- Loss: 0.693
- Accuracy: 0.523
- Precision: 0.417
- Recall: 0.263
- AUC: 0.371
- F1: 0.323
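As a sanity check (not part of AutoTrain's output), the reported F1 is consistent with the reported precision and recall, since F1 is their harmonic mean:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Using the reported metrics: precision 0.417, recall 0.263
print(round(f1_score(0.417, 0.263), 3))  # 0.323, matching the reported F1
```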
|
badokorach/mobilebert-uncased-squad-v2-qa
|
badokorach
| 2023-09-17T17:52:20Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mobilebert",
"question-answering",
"generated_from_trainer",
"base_model:badokorach/mobilebert-uncased-squad-v2-qa",
"base_model:finetune:badokorach/mobilebert-uncased-squad-v2-qa",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-17T14:54:23Z |
---
license: mit
base_model: badokorach/mobilebert-uncased-squad-v2-qa
tags:
- generated_from_trainer
model-index:
- name: mobilebert-uncased-squad-v2-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert-uncased-squad-v2-qa
This model is a fine-tuned version of [badokorach/mobilebert-uncased-squad-v2-qa](https://huggingface.co/badokorach/mobilebert-uncased-squad-v2-qa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
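The linear scheduler above decays the learning rate from its base value to zero over training; a minimal sketch, assuming zero warmup steps (the card does not state a warmup):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05,
              warmup_steps: int = 0) -> float:
    """Linear schedule: ramp up over warmup_steps, then decay linearly to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

total = 10600  # 40 epochs x 265 steps per epoch, per the results table below
print(linear_lr(0, total))      # 2e-05 at the start
print(linear_lr(total, total))  # 0.0 at the end
```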
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 265 | 2.3277 |
| 2.6476 | 2.0 | 530 | 2.1362 |
| 2.6476 | 3.0 | 795 | 1.9784 |
| 2.3668 | 4.0 | 1060 | 1.7966 |
| 2.3668 | 5.0 | 1325 | 1.7200 |
| 2.0871 | 6.0 | 1590 | 1.5585 |
| 2.0871 | 7.0 | 1855 | 1.3859 |
| 1.9018 | 8.0 | 2120 | 1.2941 |
| 1.9018 | 9.0 | 2385 | 1.2245 |
| 1.6963 | 10.0 | 2650 | 1.1069 |
| 1.6963 | 11.0 | 2915 | 0.9504 |
| 1.5186 | 12.0 | 3180 | 0.8660 |
| 1.5186 | 13.0 | 3445 | 0.8664 |
| 1.3707 | 14.0 | 3710 | 0.6955 |
| 1.3707 | 15.0 | 3975 | 0.6217 |
| 1.2402 | 16.0 | 4240 | 0.5880 |
| 1.0937 | 17.0 | 4505 | 0.5604 |
| 1.0937 | 18.0 | 4770 | 0.4484 |
| 0.9468 | 19.0 | 5035 | 0.3988 |
| 0.9468 | 20.0 | 5300 | 0.3981 |
| 0.8648 | 21.0 | 5565 | 0.3145 |
| 0.8648 | 22.0 | 5830 | 0.3053 |
| 0.7644 | 23.0 | 6095 | 0.2580 |
| 0.7644 | 24.0 | 6360 | 0.2741 |
| 0.6697 | 25.0 | 6625 | 0.2122 |
| 0.6697 | 26.0 | 6890 | 0.1946 |
| 0.6188 | 27.0 | 7155 | 0.1915 |
| 0.6188 | 28.0 | 7420 | 0.1550 |
| 0.5341 | 29.0 | 7685 | 0.1430 |
| 0.5341 | 30.0 | 7950 | 0.1287 |
| 0.4874 | 31.0 | 8215 | 0.1250 |
| 0.4874 | 32.0 | 8480 | 0.0994 |
| 0.4516 | 33.0 | 8745 | 0.0955 |
| 0.4164 | 34.0 | 9010 | 0.0890 |
| 0.4164 | 35.0 | 9275 | 0.0838 |
| 0.3864 | 36.0 | 9540 | 0.0796 |
| 0.3864 | 37.0 | 9805 | 0.0766 |
| 0.353 | 38.0 | 10070 | 0.0788 |
| 0.353 | 39.0 | 10335 | 0.0711 |
| 0.3331 | 40.0 | 10600 | 0.0681 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
daochf/Lora-MetaLlama2-7b-chat-hf-PuceDs04-v01
|
daochf
| 2023-09-17T17:49:52Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-17T17:49:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
chenqile09/chinese-alpaca-2-LoRA-7B-couplet-100k
|
chenqile09
| 2023-09-17T17:48:39Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"dataset:chenqile09/llama2-chinese-couplet-100k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-17T05:55:23Z |
---
datasets:
- chenqile09/llama2-chinese-couplet-100k
language:
- zh
metrics:
- bertscore
- bleu
library_name: transformers
---
## Chinese-alpaca-2-LoRA-7B-couplet-100k
Fine-tuned from the chinese-alpaca-2-7B model via LoRA on a 100k Chinese couplet dataset.
- Dataset: [llama2-chinese-couplet-100k](https://huggingface.co/datasets/chenqile09/llama2-chinese-couplet-100k)
- Notebook: [chinese-llama-finetuning-100k](https://github.com/Qile-Paul-Chen/chinese-llama-finetuning-couplet/blob/dev/chinese-llama-finetuning-100k.ipynb)
|
MaxArb/RotEtogoCasino
|
MaxArb
| 2023-09-17T17:45:28Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-09-14T18:26:02Z |
---
license: cc-by-nc-nd-4.0
---
|
cdahd/after5-caly-film
|
cdahd
| 2023-09-17T17:24:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-17T17:21:56Z |
# "After 5: Full Movie (CDA)"
## Film Description
The fan base of the "After" series grows by the day, and its newest installment, "After 5: Na Zawsze" (After Everything), is the culmination of the emotions and turmoil that have accompanied the main characters since the beginning of their story. Directed by Castille Landon, the film premiered on September 13, 2023, offering viewers a new, as-yet-unexplored side of the relationship between Tessa and Hardin.
## Cast
The stars of the series, Hero Fiennes-Tiffin and Josephine Langford, return in their iconic roles as Hardin Scott and Tessa Young. They are joined by names such as Stephen Moyer as Christian Vance and Louise Lombard as Trish.
## Plot
The film continues the dramatic saga of love and betrayal. Tessa and Hardin, no longer a couple, try to find new paths in life. Tessa focuses on her career, Hardin on a new challenge in Lisbon. But can a new place and new people change what once was between them? And are they ready for what awaits them?
## What's New?
One of the most interesting aspects of "After 5: Na Zawsze" is that the film is not a direct adaptation of any book. This fresh approach sets it apart from the previous installments of the series. Moreover, the film is available only in cinemas, in response to regulations aimed at curbing piracy.
## Trivia
- The final installment of the "After" series
- Not a book adaptation, but a standalone work
- A cinema premiere is the only viewing option due to anti-piracy regulations
## Final Reflection
This film closes an era for many fans of the series. Tessa and Hardin, grappling with new challenges and opportunities, must answer the question of whether their love will survive all these storms. "After 5: Na Zawsze" is an emotional and dramatic chapter of their story that will certainly leave a strong impression on viewers.
## Source: CdaTube
The information used to write this article comes from <a href="https://cdatube.pl/after-5-caly-film/">cdatube.pl</a>
|
garage-bAInd/Platypus-7B-adapters
|
garage-bAInd
| 2023-09-17T17:03:53Z | 0 | 0 | null |
[
"pytorch",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2308.07317",
"arxiv:2307.09288",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-09-17T16:58:03Z |
---
license: cc-by-nc-sa-4.0
language:
- en
datasets:
- garage-bAInd/Open-Platypus
---
# Platypus2-7B LoRA adapters
**NOTE**: There is some issue with LLaMa-2 7B and fine-tuning only works if you use `fp16=False` and `bf16=True` in the HF trainer. Gathering more intel on this but if you have any thoughts about this issue or performance, please let us know!
Platypus-7B is an instruction fine-tuned model based on the LLaMA2-7B transformer architecture.

### Benchmark Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | - |
| ARC (25-shot) | - |
| HellaSwag (10-shot) | - |
| TruthfulQA (0-shot) | - |
| Avg. | - |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
### Model Details
* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **Platypus2-7B** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English
* **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
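A small formatter and response extractor for this template might look like the following sketch (the exact spacing is an assumption based on the template above; the helper names are illustrative):

```python
TEMPLATE = "### Instruction:\n\n{prompt}\n\n### Response:\n"

def format_prompt(prompt: str) -> str:
    """Fill the prompt template shown above."""
    return TEMPLATE.format(prompt=prompt)

def extract_response(generated: str) -> str:
    """Take the text after the final '### Response:' marker."""
    return generated.rsplit("### Response:", 1)[-1].strip()

# Simulate a model completion appended to the formatted prompt.
text = format_prompt("Name a STEM field.") + "Mathematics."
print(extract_response(text))  # Mathematics.
```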
### Training Dataset
`garage-bAInd/Platypus2-7B` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
### Training Procedure
`garage-bAInd/Platypus2-7B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus2](https://github.com/arielnlee/Platypus) GitHub repo.
### Reproducing Evaluation Results
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on 1 A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks arc_challenge --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks hellaswag --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks hendrycksTest-* --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks truthfulqa_mc --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/truthfulqa_0shot.json --device cuda
```
### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
    year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```
|
JonasSim/TWingshadow_v1.4
|
JonasSim
| 2023-09-17T17:01:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-17T16:00:47Z |
---
license: creativeml-openrail-m
---
|
CyberHarem/hisakawa_hayate_idolmastercinderellagirls
|
CyberHarem
| 2023-09-17T17:00:42Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/hisakawa_hayate_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-17T16:40:35Z |
---
license: mit
datasets:
- CyberHarem/hisakawa_hayate_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hisakawa_hayate_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file is used as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 4320, you need to download `4320/hisakawa_hayate_idolmastercinderellagirls.pt` as the embedding and `4320/hisakawa_hayate_idolmastercinderellagirls.safetensors` as the LoRA. By using both files together, you can generate images of the desired character.
**The best step we recommend is 4320**, with a score of 0.965. The trigger words are:
1. `hisakawa_hayate_idolmastercinderellagirls`
2. `bangs, long_hair, grey_hair, braid, blush, blue_eyes, jewelry, braided_bangs, earrings, smile, breasts, open_mouth, collarbone, very_long_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.943 | [Download](8100/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bikini.png) | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.956 | [Download](7560/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bikini.png) | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.965 | [Download](7020/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bikini.png) | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.952 | [Download](6480/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bikini.png) | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.959 | [Download](5940/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bikini.png) | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.961 | [Download](5400/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.959 | [Download](4860/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bikini.png) | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| **4320** | **0.965** | [**Download**](4320/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.950 | [Download](3780/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bikini.png) | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.950 | [Download](3240/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.944 | [Download](2700/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bikini.png) | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.958 | [Download](2160/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.969 | [Download](1620/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bikini.png) | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.897 | [Download](1080/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.937 | [Download](540/hisakawa_hayate_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bikini.png) | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
Stoemb/phi-1_5-finetuned-html_2_text
|
Stoemb
| 2023-09-17T17:00:20Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-09-17T16:49:09Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-html_2_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-html_2_text
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 5000
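The cosine scheduler decays the learning rate from its 2e-4 peak to zero over the 5000 training steps along half a cosine cycle; a minimal sketch of the schedule shape (warmup omitted, matching the settings above):

```python
import math

def cosine_lr(step, base_lr=2e-4, total_steps=5000):
    # Half-cosine decay: base_lr at step 0, base_lr / 2 at the midpoint,
    # 0 at total_steps (a sketch of the schedule shape only).
    progress = min(step, total_steps) / total_steps
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For example, the rate halves to 1e-4 at step 2500.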
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Atulit23/meta-llama-indian-constitution
|
Atulit23
| 2023-09-17T16:50:09Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-17T16:36:06Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: meta-llama-indian-constitution
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meta-llama-indian-constitution
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
aviroes/MAScIR_elderly_whisper-medium-LoRA-data-augmented
|
aviroes
| 2023-09-17T16:47:54Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | 2023-09-17T11:58:42Z |
---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
model-index:
- name: MAScIR_elderly_whisper-medium-LoRA-data-augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MAScIR_elderly_whisper-medium-LoRA-data-augmented
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
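With the linear scheduler and 200 warmup steps, the learning rate ramps linearly up to 1e-3 and then decays linearly toward zero; a sketch of the schedule (the total step count of 3200 is an assumption — the Trainer derives it from the dataset size and the 3 epochs):

```python
def linear_warmup_lr(step, base_lr=1e-3, warmup_steps=200, total_steps=3200):
    # Linear warmup to base_lr over warmup_steps, then linear decay to 0.
    # total_steps is a hypothetical value for illustration.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```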
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8119 | 0.09 | 100 | 0.2422 |
| 0.7907 | 0.19 | 200 | 0.2357 |
| 0.6762 | 0.28 | 300 | 0.2311 |
| 0.7081 | 0.38 | 400 | 0.2256 |
| 0.5623 | 0.47 | 500 | 0.1946 |
| 0.569 | 0.57 | 600 | 0.1697 |
| 9.0833 | 0.66 | 700 | 8.1242 |
| 6.1681 | 0.76 | 800 | 5.9288 |
| 5.5565 | 0.85 | 900 | 4.9360 |
| 2.0714 | 0.95 | 1000 | 0.2584 |
| 0.6051 | 1.04 | 1100 | 0.2062 |
| 0.485 | 1.14 | 1200 | 0.1824 |
| 0.637 | 1.23 | 1300 | 0.1522 |
| 0.5521 | 1.33 | 1400 | 0.1371 |
| 0.3999 | 1.42 | 1500 | 0.1331 |
| 0.4788 | 1.52 | 1600 | 0.1344 |
| 0.3738 | 1.61 | 1700 | 0.0952 |
| 0.3046 | 1.71 | 1800 | 0.0871 |
| 0.4335 | 1.8 | 1900 | 0.0770 |
| 0.3876 | 1.9 | 2000 | 0.0654 |
| 0.4226 | 1.99 | 2100 | 0.0638 |
| 0.2651 | 2.09 | 2200 | 0.0612 |
| 0.2075 | 2.18 | 2300 | 0.0541 |
| 0.2464 | 2.28 | 2400 | 0.0473 |
| 0.1797 | 2.37 | 2500 | 0.0482 |
| 0.2393 | 2.47 | 2600 | 0.0428 |
| 0.1764 | 2.56 | 2700 | 0.0396 |
| 0.1398 | 2.66 | 2800 | 0.0390 |
| 0.1855 | 2.75 | 2900 | 0.0382 |
| 0.232 | 2.85 | 3000 | 0.0369 |
| 0.2 | 2.94 | 3100 | 0.0358 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
fetiska/Dum-E-PandaReachDense-v3
|
fetiska
| 2023-09-17T16:29:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T16:23:54Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
path = load_from_hub(repo_id="fetiska/Dum-E-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(path)
```
|
vasaicrow/distilbert-base-uncased-finetuned-imdb
|
vasaicrow
| 2023-09-17T16:29:02Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-17T16:07:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7087 | 1.0 | 157 | 2.4899 |
| 2.5798 | 2.0 | 314 | 2.4231 |
| 2.5271 | 3.0 | 471 | 2.4356 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
asas-ai/nllb-200-distilled-600M-finetuned_augmented_synthetic_ar-to-en
|
asas-ai
| 2023-09-17T16:26:14Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"translation",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-19T14:28:14Z |
---
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nllb-200-distilled-600M-finetuned_augmented_synthetic_ar-to-en
results: []
pipeline_tag: translation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-distilled-600M-finetuned_augmented_synthetic_ar-to-en
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7501
- Bleu: 62.4193
- Gen Len: 64.586
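The BLEU score above combines modified n-gram precisions with a brevity penalty that discounts hypotheses shorter than the reference; a sketch of the brevity-penalty term (standard BLEU definition):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    # BP = 1 when the hypothesis is at least as long as the reference,
    # exp(1 - ref_len / hyp_len) otherwise.
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```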
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.0564 | 1.0 | 2210 | 1.0374 | 45.431 | 65.406 |
| 0.8998 | 2.0 | 4420 | 0.8975 | 52.6173 | 66.014 |
| 0.7972 | 3.0 | 6630 | 0.8399 | 55.9624 | 65.357 |
| 0.7451 | 4.0 | 8840 | 0.8021 | 57.3958 | 65.94 |
| 0.6884 | 5.0 | 11050 | 0.7771 | 59.9589 | 65.367 |
| 0.6742 | 6.0 | 13260 | 0.7648 | 61.0786 | 64.74 |
| 0.6599 | 7.0 | 15470 | 0.7562 | 61.9442 | 64.694 |
| 0.6168 | 8.0 | 17680 | 0.7530 | 62.0067 | 64.965 |
| 0.6234 | 9.0 | 19890 | 0.7502 | 62.0721 | 64.888 |
| 0.5948 | 10.0 | 22100 | 0.7501 | 62.4193 | 64.586 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
asas-ai/opus-mt-ar-en-finetuned_augmented_MT-ar-to-en
|
asas-ai
| 2023-09-17T16:25:17Z | 125 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"base_model:Helsinki-NLP/opus-mt-ar-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ar-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-15T16:43:53Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-ar-en
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned_augmented_MTback-ar-to-en
results: []
pipeline_tag: translation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned_augmented_MTback-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8227
- Bleu: 66.3415
- Gen Len: 59.569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.3965 | 1.0 | 1098 | 1.1276 | 52.6355 | 62.191 |
| 2.0086 | 2.0 | 2196 | 0.9785 | 58.7217 | 61.399 |
| 1.7825 | 3.0 | 3294 | 0.9172 | 61.1046 | 61.316 |
| 1.6434 | 4.0 | 4392 | 0.8788 | 63.501 | 60.232 |
| 1.5295 | 5.0 | 5490 | 0.8571 | 64.7425 | 59.709 |
| 1.4316 | 6.0 | 6588 | 0.8419 | 65.7013 | 59.381 |
| 1.3766 | 7.0 | 7686 | 0.8315 | 65.9805 | 59.585 |
| 1.3241 | 8.0 | 8784 | 0.8254 | 66.2432 | 59.516 |
| 1.2965 | 9.0 | 9882 | 0.8238 | 66.2241 | 59.604 |
| 1.2877 | 10.0 | 10980 | 0.8227 | 66.3415 | 59.569 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
asas-ai/opus-mt-ar-en-finetuned_augmented_synthetic-ar-to-en
|
asas-ai
| 2023-09-17T16:24:06Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"base_model:Helsinki-NLP/opus-mt-ar-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ar-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-17T15:36:11Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-ar-en
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned_augmented_synthetic-ar-to-en
results: []
pipeline_tag: translation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned_augmented_synthetic-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8682
- Bleu: 63.4498
- Gen Len: 59.457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.9549 | 1.0 | 1105 | 1.2644 | 43.0637 | 61.33 |
| 0.7674 | 2.0 | 2210 | 1.0862 | 51.6055 | 60.714 |
| 0.6736 | 3.0 | 3315 | 0.9910 | 56.1642 | 60.434 |
| 0.6011 | 4.0 | 4420 | 0.9463 | 59.6059 | 59.682 |
| 0.5543 | 5.0 | 5525 | 0.9158 | 61.101 | 59.493 |
| 0.5176 | 6.0 | 6630 | 0.8961 | 61.9065 | 59.468 |
| 0.4849 | 7.0 | 7735 | 0.8840 | 62.6833 | 59.5 |
| 0.4692 | 8.0 | 8840 | 0.8727 | 63.0766 | 59.425 |
| 0.464 | 9.0 | 9945 | 0.8709 | 63.3354 | 59.454 |
| 0.4486 | 10.0 | 11050 | 0.8682 | 63.4498 | 59.457 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
semaj83/speecht5_finetuned_multilingual_librispeech_de
|
semaj83
| 2023-09-17T16:23:40Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"de",
"dataset:facebook/multilingual_librispeech",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-16T23:24:59Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_multilingual_librispeech_de
results: []
datasets:
- facebook/multilingual_librispeech
language:
- de
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_multilingual_librispeech_de
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
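The total train batch size of 32 comes from accumulating gradients over 8 micro-batches of size 4 before each optimizer update; a pure-Python sketch of the bookkeeping (plain numbers stand in for gradient tensors):

```python
def accumulate(grads, accum_steps=8):
    # Average micro-batch gradients in groups of accum_steps, yielding
    # one optimizer-step gradient per group (effective batch = 4 * 8 = 32).
    updates = []
    for i in range(0, len(grads), accum_steps):
        group = grads[i:i + accum_steps]
        updates.append(sum(group) / len(group))
    return updates
```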
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4472 | 76.92 | 1000 | 0.4305 |
| 0.4181 | 153.85 | 2000 | 0.4299 |
| 0.4138 | 230.77 | 3000 | 0.4353 |
| 0.4163 | 307.69 | 4000 | 0.4373 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
zjoe/a2c-PandaPickAndPlace-v3
|
zjoe
| 2023-09-17T16:23:18Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T16:17:50Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
path = load_from_hub(repo_id="zjoe/a2c-PandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(path)
```
|
ashishkat/adalora
|
ashishkat
| 2023-09-17T16:22:09Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-30T11:37:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Powidl43/p4a-mandala
|
Powidl43
| 2023-09-17T16:14:35Z | 0 | 1 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-17T15:13:32Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
---
# P4A Mandala
- trained with kohya_ss (edg settings)
- pinkal09 dataset: deviantart.com/pinkal09
- trigger: "p4a psychedelic" + EasyNegative
  - huggingface.co/LibreSD/Various/resolve/main/EasyNegative.safetensors
  - huggingface.co/LibreSD/Various/resolve/main/EasyNegativeV2.safetensors
- samples: civitai.com/models/147191/p4a-mandala
---
# Merge Info
step1_hard = [p4a-step2](https://huggingface.co/Powidl43/psychedelic/tree/main/step2-merge) 0.6 + pinkal09 0.4
step2_hard_a = step1_hard-camelliamix_v3 0.6 + step1_hard-greymix_v2 0.4
step2_hard_b = step1_hard-counterfeit_v3 0.6 + step1_hard-nabimix_v2 0.4
p4a_mandala_v1_hard_a = step2_hard_a 0.6 + step2_hard_b 0.4
p4a_mandala_v1_hard_b = step2_hard_b 0.6 + step2_hard_a 0.4
step1_soft = [p4a-step2](https://huggingface.co/Powidl43/psychedelic/tree/main/step2-merge) 0.8 + pinkal09 0.2
step2_soft_a = step1_soft-camelliamix_v3 0.6 + step1_soft-greymix_v2 0.4
step2_soft_b = step1_soft-counterfeit_v3 0.6 + step1_soft-nabimix_v2 0.4
p4a_mandala_v1_soft_a = step2_soft_a 0.6 + step2_soft_b 0.4
p4a_mandala_v1_soft_b = step2_soft_b 0.6 + step2_soft_a 0.4
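Each recipe line above is a weighted average of two checkpoints; a sketch of the underlying operation on state dicts (plain floats stand in for weight tensors):

```python
def merge(a, b, alpha):
    # Weighted-sum merge, key by key: result = alpha * a + (1 - alpha) * b.
    return {k: alpha * a[k] + (1.0 - alpha) * b[k] for k in a}

# e.g. step1_hard corresponds to merge(p4a_step2, pinkal09, 0.6)
```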
---
base models and other essentials huggingface.co/LibreSD
|
Osmond141319/Tameheadmix
|
Osmond141319
| 2023-09-17T16:12:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-10T04:46:42Z |
https://civitai.com/models/27707?modelVersionId=161189
|
ys7yoo/nli_sts_roberta_large_lr1e-05_wd1e-03_ep10_lr1e-05_wd1e-03_ep10_ckpt
|
ys7yoo
| 2023-09-17T16:10:54Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/sts_roberta_large_lr1e-05_wd1e-03_ep10",
"base_model:finetune:ys7yoo/sts_roberta_large_lr1e-05_wd1e-03_ep10",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-17T15:25:51Z |
---
base_model: ys7yoo/sts_roberta_large_lr1e-05_wd1e-03_ep10
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
- f1
model-index:
- name: nli_sts_roberta_large_lr1e-05_wd1e-03_ep10_lr1e-05_wd1e-03_ep10_ckpt
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
config: nli
split: validation
args: nli
metrics:
- name: Accuracy
type: accuracy
value: 0.8963333333333333
- name: F1
type: f1
value: 0.8962457758881018
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli_sts_roberta_large_lr1e-05_wd1e-03_ep10_lr1e-05_wd1e-03_ep10_ckpt
This model is a fine-tuned version of [ys7yoo/sts_roberta_large_lr1e-05_wd1e-03_ep10](https://huggingface.co/ys7yoo/sts_roberta_large_lr1e-05_wd1e-03_ep10) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6903
- Accuracy: 0.8963
- F1: 0.8962
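The F1 above is computed from per-class precision and recall over the NLI labels (macro averaging is an assumption here); a minimal sketch:

```python
def macro_f1(y_true, y_pred):
    # Macro F1: unweighted mean of per-class F1 scores.
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)
```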
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6445 | 1.0 | 391 | 0.4254 | 0.852 | 0.8512 |
| 0.2943 | 2.0 | 782 | 0.3371 | 0.889 | 0.8886 |
| 0.1586 | 3.0 | 1173 | 0.3704 | 0.888 | 0.8881 |
| 0.0921 | 4.0 | 1564 | 0.4429 | 0.892 | 0.8919 |
| 0.0565 | 5.0 | 1955 | 0.4864 | 0.899 | 0.8989 |
| 0.0378 | 6.0 | 2346 | 0.5727 | 0.8963 | 0.8962 |
| 0.0238 | 7.0 | 2737 | 0.6247 | 0.8957 | 0.8955 |
| 0.016 | 8.0 | 3128 | 0.6578 | 0.8947 | 0.8945 |
| 0.0101 | 9.0 | 3519 | 0.6780 | 0.8953 | 0.8952 |
| 0.0067 | 10.0 | 3910 | 0.6903 | 0.8963 | 0.8962 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
NabeelShar/emotion-dectect
|
NabeelShar
| 2023-09-17T16:05:13Z | 220 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-13T18:11:40Z |
---
license: apache-2.0
---

### Just Another Project
|
wzneric/df_wm_id1
|
wzneric
| 2023-09-17T15:50:11Z | 7 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-17T15:01:24Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks Tshirt
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - wzneric/df_wm_id1
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks Tshirt using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
ihsansatriawan/image_classification
|
ihsansatriawan
| 2023-09-17T15:42:27Z | 214 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-15T20:40:12Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.55625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2908
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00018
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.2380 | 0.5062 |
| No log | 2.0 | 40 | 1.1930 | 0.6 |
| No log | 3.0 | 60 | 1.2037 | 0.5687 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
guydebruyn/ppo-SnowballTarget
|
guydebruyn
| 2023-09-17T15:39:26Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-17T15:39:20Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: guydebruyn/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pensuke/xlm-roberta-base-finetuned-panx-de
|
pensuke
| 2023-09-17T15:39:13Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-17T11:21:31Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.838776250104582
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1927
- F1: 0.8388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.338 | 1.0 | 525 | 0.2224 | 0.7987 |
| 0.1756 | 2.0 | 1050 | 0.1949 | 0.8280 |
| 0.1131 | 3.0 | 1575 | 0.1927 | 0.8388 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.14.0
|
BolaKubuz/distilhubert-finetuned-gtzan
|
BolaKubuz
| 2023-09-17T15:38:16Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-09T14:28:37Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6334
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.172 | 0.99 | 56 | 2.0068 | 0.5 |
| 1.6062 | 2.0 | 113 | 1.4539 | 0.57 |
| 1.2326 | 2.99 | 169 | 1.1605 | 0.67 |
| 1.0537 | 4.0 | 226 | 1.0225 | 0.73 |
| 0.8398 | 4.99 | 282 | 0.8392 | 0.8 |
| 0.7322 | 6.0 | 339 | 0.8435 | 0.76 |
| 0.6144 | 6.99 | 395 | 0.7217 | 0.83 |
| 0.5545 | 8.0 | 452 | 0.6526 | 0.84 |
| 0.4077 | 8.99 | 508 | 0.6378 | 0.83 |
| 0.4029 | 9.91 | 560 | 0.6334 | 0.83 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
lloorree/mythxl-70b-gptq
|
lloorree
| 2023-09-17T15:30:21Z | 8 | 6 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"dataset:kaiokendev/SuperCOT-dataset",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-11T06:32:19Z |
---
license: cc-by-nc-sa-4.0
datasets:
- kaiokendev/SuperCOT-dataset
---
Quantized 70B recreation of [MythoMax](https://huggingface.co/Gryphe/MythoMax-L2-13b).
Differences:
- Includes a 70B recreation of SuperCOT as in the 1.2 version of Huginn
- Anywhere Airoboros is merged in, the 1.4.1 version was used instead of 2.X
Known limitation: it *strongly* prefers novel format in roleplay, and will revert to it over time regardless of context or conversation history.
License is strictly noncommercial, both to match that of its major dependency [Chronos 70B](https://huggingface.co/elinas/chronos-70b-v2) and in its own right.
## Prompt Format (Copied from the MythoMax page, not necessarily optimal)
This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
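As a minimal sketch of assembling the Alpaca-style prompt described above — `build_alpaca_prompt` is a hypothetical helper for illustration, not part of any library, and the character details are made up:

```python
def build_alpaca_prompt(system_prompt: str, instruction: str) -> str:
    """Concatenate the sections in the order the prompt format above recommends.

    Hypothetical helper; simply strings together the system prompt / character
    card, the instruction block, and the response header.
    """
    return (
        f"{system_prompt}\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

# Example roleplay usage, following the suggestion in the card:
prompt = build_alpaca_prompt(
    "You are Aria, a witty space pirate.",  # system prompt / character card (invented)
    "Write Aria's next reply in a chat between Sam and Aria. Write a single reply only.",
)
print(prompt)
```

Generation would then continue from the trailing `### Response:` header.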
|
wanzhenen/rudeus
|
wanzhenen
| 2023-09-17T15:11:11Z | 31 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-17T15:10:19Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Rudeus on Stable Diffusion via Dreambooth
#### model by wanzhenen
This is the Stable Diffusion model fine-tuned on the Rudeus concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<rudeus> anime man**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:




|
lpepino/encodecmae-small
|
lpepino
| 2023-09-17T15:09:10Z | 0 | 1 | null |
[
"arxiv:2309.07391",
"license:mit",
"region:us"
] | null | 2023-09-11T01:28:33Z |
---
license: mit
---
# Model description
This is EnCodecMAE, an audio feature extractor pretrained with masked language modelling to predict discrete targets generated by EnCodec, a neural audio codec.
For more details about the architecture and pretraining procedure, read the [paper](https://arxiv.org/abs/2309.07391).
# Usage
### 1) Clone the [EnCodecMAE library](https://github.com/habla-liaa/encodecmae):
```bash
git clone https://github.com/habla-liaa/encodecmae.git
```
### 2) Install it:
```bash
cd encodecmae
pip install -e .
```
### 3) Extract embeddings in Python:
``` python
from encodecmae import load_model
model = load_model('small', device='cuda:0')
features = model.extract_features_from_file('gsc/bed/00176480_nohash_0.wav')
```
|
lpepino/encodecmae-large-st
|
lpepino
| 2023-09-17T15:08:17Z | 0 | 1 | null |
[
"arxiv:2309.07391",
"license:mit",
"region:us"
] | null | 2023-09-11T01:39:07Z |
---
license: mit
---
# Model description
This is EnCodecMAE, an audio feature extractor pretrained with masked language modelling to predict discrete targets generated by EnCodec, a neural audio codec.
For more details about the architecture and pretraining procedure, read the [paper](https://arxiv.org/abs/2309.07391).
# Usage
### 1) Clone the [EnCodecMAE library](https://github.com/habla-liaa/encodecmae):
```bash
git clone https://github.com/habla-liaa/encodecmae.git
```
### 2) Install it:
```bash
cd encodecmae
pip install -e .
```
### 3) Extract embeddings in Python:
``` python
from encodecmae import load_model
model = load_model('large-st', device='cuda:0')
features = model.extract_features_from_file('gsc/bed/00176480_nohash_0.wav')
```
|
microsoft/prophetnet-large-uncased-squad-qg
|
microsoft
| 2023-09-17T15:07:14Z | 587 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"prophetnet",
"text2text-generation",
"en",
"dataset:squad",
"arxiv:2001.04063",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- squad
---
## prophetnet-large-uncased-squad-qg
Fine-tuned weights (converted from the [original fairseq version repo](https://github.com/microsoft/ProphetNet)) for [ProphetNet](https://arxiv.org/abs/2001.04063) on the question generation task over SQuAD 1.1.
ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction.
ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is the Fairseq version at its [github repo](https://github.com/microsoft/ProphetNet).
### Usage
```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration
model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/prophetnet-large-uncased-squad-qg')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased-squad-qg')
FACT_TO_GENERATE_QUESTION_FROM = "Bill Gates [SEP] Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975."
inputs = tokenizer([FACT_TO_GENERATE_QUESTION_FROM], return_tensors='pt')
# Generate Summary
question_ids = model.generate(inputs['input_ids'], num_beams=5, early_stopping=True)
tokenizer.batch_decode(question_ids, skip_special_tokens=True)
# should give: 'along with paul allen, who founded microsoft?'
```
### Citation
```bibtex
@article{yan2020prophetnet,
title={Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training},
author={Yan, Yu and Qi, Weizhen and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming},
journal={arXiv preprint arXiv:2001.04063},
year={2020}
}
```
|
sd-dreambooth-library/snapp-g-data
|
sd-dreambooth-library
| 2023-09-17T15:05:38Z | 31 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-17T15:02:27Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### snapp_g_data on Stable Diffusion via Dreambooth
#### model by hosnasn
This is the Stable Diffusion model fine-tuned on the snapp_g_data concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **m_concept**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:




|
dini-r-a/emotion_classification
|
dini-r-a
| 2023-09-17T15:01:58Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-14T05:43:05Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: FastJobs--Visual_Emotional_Analysis
split: train[:-1]
args: FastJobs--Visual_Emotional_Analysis
metrics:
- name: Accuracy
type: accuracy
value: 0.5625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6256
- Accuracy: 0.5625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 1.7794 | 0.4875 |
| No log | 2.0 | 20 | 1.6813 | 0.4938 |
| 0.2276 | 3.0 | 30 | 1.7602 | 0.4875 |
| 0.2276 | 4.0 | 40 | 1.9172 | 0.4562 |
| 0.2048 | 5.0 | 50 | 1.9316 | 0.4625 |
| 0.2048 | 6.0 | 60 | 1.8285 | 0.5 |
| 0.2048 | 7.0 | 70 | 1.6341 | 0.5687 |
| 0.1617 | 8.0 | 80 | 1.7461 | 0.5375 |
| 0.1617 | 9.0 | 90 | 1.6544 | 0.5312 |
| 0.1766 | 10.0 | 100 | 1.9449 | 0.4875 |
| 0.1766 | 11.0 | 110 | 1.7565 | 0.5125 |
| 0.1766 | 12.0 | 120 | 1.8936 | 0.5 |
| 0.1979 | 13.0 | 130 | 1.6812 | 0.5687 |
| 0.1979 | 14.0 | 140 | 1.7619 | 0.5188 |
| 0.1694 | 15.0 | 150 | 1.6903 | 0.55 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
TinyLlama/TinyLlama-1.1B-Chat-v0.2
|
TinyLlama
| 2023-09-17T15:00:54Z | 65 | 13 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-17T04:45:53Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b). The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
**Update from V0.1: 1. Different dataset. 2. Different chat format (now [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) formatted conversations).**
#### How to use
You will need transformers >= 4.31.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-Chat-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
prompt = "How to get in a good university?"
formatted_prompt = (
f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
)
sequences = pipeline(
formatted_prompt,
do_sample=True,
top_k=50,
top_p = 0.9,
num_return_sequences=1,
repetition_penalty=1.1,
max_new_tokens=1024,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
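The single-turn `formatted_prompt` above follows the chatml convention. A minimal sketch of extending it to multi-turn conversations — `to_chatml` is a hypothetical helper written for illustration, not part of the TinyLlama repo:

```python
def to_chatml(turns):
    """Format (role, content) pairs in the chatml style used by this chat model.

    Mirrors the single-turn `formatted_prompt` above and appends the assistant
    header so generation continues from there.
    """
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>\n" for role, content in turns]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([("user", "How to get in a good university?")])
```

The resulting string can be passed to the `pipeline` call shown above in place of `formatted_prompt`.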
|
nagupv/Stable13B_contextLLMExam_18kv2_f0
|
nagupv
| 2023-09-17T14:57:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-19T13:36:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
NewstaR/Porpoise-6b-instruct
|
NewstaR
| 2023-09-17T14:55:11Z | 2,581 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"custom_code",
"dataset:Open-Orca/OpenOrca",
"dataset:cerebras/SlimPajama-627B",
"dataset:ehartford/dolphin",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-17T14:27:51Z |
---
datasets:
- Open-Orca/OpenOrca
- cerebras/SlimPajama-627B
- ehartford/dolphin
---
This model is a fine-tuned version of DeciLM-6b-instruct on the Dolphin GPT4 dataset.
Please set `naive_attention_prefill` to `True` when loading this model.
**Example:**
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "NewstaR/Porpoise-6b-instruct"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
trust_remote_code=True,
naive_attention_prefill=True,
)
model.config.use_cache = False
```
|
nielsr/van-base-finetuned-eurosat-imgaug
|
nielsr
| 2023-09-17T14:46:28Z | 208 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"van",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:Visual-Attention-Network/van-base",
"base_model:finetune:Visual-Attention-Network/van-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-04-11T12:46:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
base_model: Visual-Attention-Network/van-base
model-index:
- name: van-base-finetuned-eurosat-imgaug
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- type: accuracy
value: 0.9885185185185185
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# van-base-finetuned-eurosat-imgaug
This model is a fine-tuned version of [Visual-Attention-Network/van-base](https://huggingface.co/Visual-Attention-Network/van-base) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0379
- Accuracy: 0.9885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0887 | 1.0 | 190 | 0.0589 | 0.98 |
| 0.055 | 2.0 | 380 | 0.0390 | 0.9878 |
| 0.0223 | 3.0 | 570 | 0.0379 | 0.9885 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Gladiator/funnel-transformer-xlarge_ner_wikiann
|
Gladiator
| 2023-09-17T14:42:45Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"funnel",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-09T16:19:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: funnel-transformer-xlarge_ner_wikiann
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: en
metrics:
- name: Precision
type: precision
value: 0.8522084990579862
- name: Recall
type: recall
value: 0.8633535981903011
- name: F1
type: f1
value: 0.8577448467184043
- name: Accuracy
type: accuracy
value: 0.935805105791199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# funnel-transformer-xlarge_ner_wikiann
This model is a fine-tuned version of [funnel-transformer/xlarge](https://huggingface.co/funnel-transformer/xlarge) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Precision: 0.8522
- Recall: 0.8634
- F1: 0.8577
- Accuracy: 0.9358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3193 | 1.0 | 5000 | 0.3116 | 0.8239 | 0.8296 | 0.8267 | 0.9260 |
| 0.2836 | 2.0 | 10000 | 0.2846 | 0.8446 | 0.8498 | 0.8472 | 0.9325 |
| 0.2237 | 3.0 | 15000 | 0.3258 | 0.8427 | 0.8542 | 0.8484 | 0.9332 |
| 0.1303 | 4.0 | 20000 | 0.3801 | 0.8531 | 0.8634 | 0.8582 | 0.9362 |
| 0.0867 | 5.0 | 25000 | 0.4023 | 0.8522 | 0.8634 | 0.8577 | 0.9358 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Serotina/rl_course_vizdoom_health_gathering_supreme
|
Serotina
| 2023-09-17T14:35:43Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T14:35:32Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.35 +/- 6.07
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Serotina/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ys7yoo/nli_sts_roberta_large_lr1e_05_wd1e_03_ep5_lr1e-05_wd1e-03_ep5_ckpt
|
ys7yoo
| 2023-09-17T14:26:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/sts_roberta-large_lr1e-05_wd1e-03_ep5",
"base_model:finetune:ys7yoo/sts_roberta-large_lr1e-05_wd1e-03_ep5",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-17T13:54:14Z |
---
base_model: ys7yoo/sts_roberta_large_lr1e-05_wd1e-03_ep5
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
- f1
model-index:
- name: nli_sts_roberta_large_lr1e_05_wd1e_03_ep5_lr1e-05_wd1e-03_ep5_ckpt
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
config: nli
split: validation
args: nli
metrics:
- name: Accuracy
type: accuracy
value: 0.8986666666666666
- name: F1
type: f1
value: 0.8985280502079203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli_sts_roberta_large_lr1e_05_wd1e_03_ep5_lr1e-05_wd1e-03_ep5_ckpt
This model is a fine-tuned version of [ys7yoo/sts_roberta_large_lr1e-05_wd1e-03_ep5](https://huggingface.co/ys7yoo/sts_roberta_large_lr1e-05_wd1e-03_ep5) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4971
- Accuracy: 0.8987
- F1: 0.8985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5471 | 1.0 | 391 | 0.3522 | 0.876 | 0.8756 |
| 0.2379 | 2.0 | 782 | 0.3345 | 0.8983 | 0.8981 |
| 0.1215 | 3.0 | 1173 | 0.3708 | 0.8997 | 0.8995 |
| 0.0661 | 4.0 | 1564 | 0.4734 | 0.896 | 0.8958 |
| 0.0407 | 5.0 | 1955 | 0.4971 | 0.8987 | 0.8985 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
CyberHarem/satou_shin_idolmastercinderellagirls
|
CyberHarem
| 2023-09-17T14:25:10Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/satou_shin_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-17T14:02:17Z |
---
license: mit
datasets:
- CyberHarem/satou_shin_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of satou_shin_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3240, you need to download `3240/satou_shin_idolmastercinderellagirls.pt` as the embedding and `3240/satou_shin_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3240**, with a score of 0.946. The trigger words are:
1. `satou_shin_idolmastercinderellagirls`
2. `green_eyes, ahoge, blush, smile, bangs, long_hair, breasts, twintails, heart, blonde_hair, hair_ornament`
Use of this model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.904 | [Download](8100/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) | [<NSFW, click to see>](8100/previews/free.png) |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.860 | [Download](7560/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) | [<NSFW, click to see>](7560/previews/free.png) |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.937 | [Download](7020/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) | [<NSFW, click to see>](7020/previews/free.png) |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.935 | [Download](6480/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) | [<NSFW, click to see>](6480/previews/free.png) |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.917 | [Download](5940/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) | [<NSFW, click to see>](5940/previews/free.png) |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.934 | [Download](5400/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) | [<NSFW, click to see>](5400/previews/free.png) |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.900 | [Download](4860/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) | [<NSFW, click to see>](4860/previews/free.png) |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.935 | [Download](4320/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) | [<NSFW, click to see>](4320/previews/free.png) |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.907 | [Download](3780/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) | [<NSFW, click to see>](3780/previews/free.png) |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| **3240** | **0.946** | [**Download**](3240/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) | [<NSFW, click to see>](3240/previews/free.png) |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.915 | [Download](2700/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) | [<NSFW, click to see>](2700/previews/free.png) |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.909 | [Download](2160/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) | [<NSFW, click to see>](2160/previews/free.png) |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.896 | [Download](1620/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) | [<NSFW, click to see>](1620/previews/free.png) |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.842 | [Download](1080/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) | [<NSFW, click to see>](1080/previews/free.png) |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.828 | [Download](540/satou_shin_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/pattern_12.png) |  |  |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) | [<NSFW, click to see>](540/previews/free.png) |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
sd-dreambooth-library/s-g-data
|
sd-dreambooth-library
| 2023-09-17T14:15:05Z | 32 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-17T14:14:17Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### s_g_data on Stable Diffusion via Dreambooth
#### model by hosnasn
This is the Stable Diffusion model fine-tuned on the s_g_data concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<cat-toy> toy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:




|
salim4n/Reinforce-Cartpole-v1
|
salim4n
| 2023-09-17T14:14:39Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T14:02:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
deepachalapathi/parasci_1
|
deepachalapathi
| 2023-09-17T14:12:42Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-17T08:54:53Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# whateverweird17/parasci_1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("whateverweird17/parasci_1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Ojas-CoderAI/Reinforce-model
|
Ojas-CoderAI
| 2023-09-17T14:06:11Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T13:32:04Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ys7yoo/sts_nli_klue_roberta_large_lr1e_05_wd1e_03_lr1e-05_wd1e-03_ep5_ckpt
|
ys7yoo
| 2023-09-17T13:38:06Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/nli_klue_roberta-large_lr1e-05_wd1e-03",
"base_model:finetune:ys7yoo/nli_klue_roberta-large_lr1e-05_wd1e-03",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-17T13:13:11Z |
---
base_model: ys7yoo/nli_klue_roberta_large_lr1e-05_wd1e-03
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_nli_klue_roberta_large_lr1e_05_wd1e_03_lr1e-05_wd1e-03_ep5_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_nli_klue_roberta_large_lr1e_05_wd1e_03_lr1e-05_wd1e-03_ep5_ckpt
This model is a fine-tuned version of [ys7yoo/nli_klue_roberta_large_lr1e-05_wd1e-03](https://huggingface.co/ys7yoo/nli_klue_roberta_large_lr1e-05_wd1e-03) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3169
- Mse: 0.3169
- Mae: 0.4090
- R2: 0.8549
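The STS head is a regression model, so it is evaluated with MSE, MAE, and R². A minimal pure-Python sketch of those three metrics (the score values are illustrative only, not the actual KLUE-STS predictions):

```python
def regression_metrics(y_true, y_pred):
    """Return (mse, mae, r2) for paired gold and predicted similarity scores."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    r2 = 1 - ss_res / ss_tot
    return mse, mae, r2

# KLUE-STS labels are similarity scores in [0, 5]
mse, mae, r2 = regression_metrics([0.0, 2.5, 5.0], [0.5, 2.0, 4.5])
```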
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.1893 | 1.0 | 183 | 0.5904 | 0.5904 | 0.5827 | 0.7296 |
| 0.1418 | 2.0 | 366 | 0.4095 | 0.4095 | 0.4737 | 0.8125 |
| 0.0967 | 3.0 | 549 | 0.3657 | 0.3657 | 0.4383 | 0.8326 |
| 0.0752 | 4.0 | 732 | 0.3391 | 0.3391 | 0.4254 | 0.8447 |
| 0.06 | 5.0 | 915 | 0.3169 | 0.3169 | 0.4090 | 0.8549 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
reluvie/RVC_v2_Red_Velvet
|
reluvie
| 2023-09-17T13:14:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-17T11:32:01Z |
please credit when using!! (@_reluvie on YouTube, TikTok and Discord)
SEULGI - mangio-crepe, 500 epochs, 35.5k steps
WENDY - rmvpe, 300 epochs, 25.5k steps
|
Yntec/CitrineDreamMix
|
Yntec
| 2023-09-17T12:42:27Z | 383 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"anime",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-17T11:33:13Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Citrine Dream Mix
Original page: https://civitai.com/models/18116?modelVersionId=21839
Samples and prompt:


Anime fine details portrait of joyful cute little girl sleep school class room, bokeh. anime masterpiece by studio ghibli. 8k, sharp high quality classic anime from 1990 in style of hayao miyazaki. Wikipedia. hugging. OIL PAINTING. DOCTOR with short hair in coat BEAUTIFUL girl eyes. she has pigtails
|
Flifenstein/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
Flifenstein
| 2023-09-17T12:30:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-17T10:47:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Shishir1807/M10_llama
|
Shishir1807
| 2023-09-17T12:13:27Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-17T12:11:52Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/M10_llama",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/M10_llama",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/M10_llama",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/M10_llama" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model with quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Sharding across multiple GPUs is also possible by setting ```device_map="auto"```.
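Under the hood, 8-bit loading (via `bitsandbytes`) stores weights as int8 codes plus a floating-point scale. A minimal absmax round-trip sketch of the idea in pure Python (illustrative only — the real implementation is vectorized, block-wise, and handles outliers separately):

```python
def quantize_absmax(weights):
    """Map floats to int8 codes in [-127, 127], scaled by the absolute maximum."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Approximate reconstruction of the original floats."""
    return [c * scale for c in codes]

w = [0.12, -0.54, 1.27, -1.0]
q, scale = quantize_absmax(w)   # int8 codes plus one float scale
w_hat = dequantize(q, scale)    # close to w, within quantization error
```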
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
sd-dreambooth-library/s-grocery-data
|
sd-dreambooth-library
| 2023-09-17T12:02:01Z | 34 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-17T12:00:47Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### s_grocery_data on Stable Diffusion via Dreambooth
#### model by hosnasn
This is the Stable Diffusion model fine-tuned on the s_grocery_data concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<cat-toy> toy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:










|
sparasdya/image_classification
|
sparasdya
| 2023-09-17T11:48:58Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T10:08:40Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.55
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1552
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.6906 | 0.3375 |
| No log | 2.0 | 80 | 1.4310 | 0.4062 |
| No log | 3.0 | 120 | 1.3517 | 0.4875 |
| No log | 4.0 | 160 | 1.2080 | 0.5437 |
| No log | 5.0 | 200 | 1.1920 | 0.5437 |
| No log | 6.0 | 240 | 1.1123 | 0.575 |
| No log | 7.0 | 280 | 1.1533 | 0.575 |
| No log | 8.0 | 320 | 1.0971 | 0.5813 |
| No log | 9.0 | 360 | 1.1635 | 0.5687 |
| No log | 10.0 | 400 | 1.1344 | 0.5875 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
cloudwalkerw/wavlm-base_2
|
cloudwalkerw
| 2023-09-17T11:40:53Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"audio-classification",
"generated_from_trainer",
"base_model:microsoft/wavlm-base",
"base_model:finetune:microsoft/wavlm-base",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-15T16:57:01Z |
---
base_model: microsoft/wavlm-base
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wavlm-base_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-base_2
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0244
- Accuracy: 0.9966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 2
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50.0
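With `gradient_accumulation_steps: 4`, each optimizer step aggregates gradients from 4 micro-batches of 16, giving the effective batch size of 64 listed above. A minimal sketch of the accumulation loop over scalar "gradients" (pure Python, illustrative only):

```python
def train_with_accumulation(micro_batch_grads, accum_steps=4):
    """Average gradients over accum_steps micro-batches before each optimizer step."""
    steps = []
    acc, count = 0.0, 0
    for g in micro_batch_grads:
        acc += g / accum_steps   # scale each micro-batch gradient by 1/accum_steps
        count += 1
        if count == accum_steps:
            steps.append(acc)    # this is where optimizer.step() would run
            acc, count = 0.0, 0
    return steps

# 8 micro-batches of size 16 -> 2 optimizer steps at effective batch size 64
steps = train_with_accumulation([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
```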
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4872 | 0.25 | 100 | 0.2180 | 0.8974 |
| 0.1571 | 0.5 | 200 | 0.2582 | 0.9334 |
| 0.0644 | 0.76 | 300 | 0.0244 | 0.9966 |
| 0.0553 | 1.01 | 400 | 0.1156 | 0.9928 |
| 0.1108 | 1.26 | 500 | 0.1576 | 0.9898 |
| 0.0849 | 1.51 | 600 | 0.0871 | 0.9947 |
| 0.0635 | 1.76 | 700 | 0.1088 | 0.9939 |
| 0.0504 | 2.02 | 800 | 0.4074 | 0.9790 |
| 0.1075 | 2.27 | 900 | 0.2955 | 0.9814 |
| 0.2387 | 2.52 | 1000 | 0.0651 | 0.9956 |
| 0.3052 | 2.77 | 1100 | 0.2379 | 0.8974 |
| 0.3336 | 3.02 | 1200 | 0.3527 | 0.8974 |
| 0.3322 | 3.28 | 1300 | 0.3307 | 0.8974 |
| 0.3201 | 3.53 | 1400 | 0.3405 | 0.8974 |
| 0.3406 | 3.78 | 1500 | 0.3335 | 0.8974 |
| 0.3475 | 4.03 | 1600 | 0.3341 | 0.8974 |
| 0.3312 | 4.28 | 1700 | 0.3361 | 0.8974 |
| 0.3367 | 4.54 | 1800 | 0.3310 | 0.8974 |
| 0.3284 | 4.79 | 1900 | 0.3339 | 0.8974 |
| 0.3267 | 5.04 | 2000 | 0.3350 | 0.8974 |
| 0.338 | 5.29 | 2100 | 0.3308 | 0.8974 |
| 0.3277 | 5.55 | 2200 | 0.3309 | 0.8974 |
| 0.3294 | 5.8 | 2300 | 0.3313 | 0.8974 |
| 0.3315 | 6.05 | 2400 | 0.3360 | 0.8974 |
| 0.3397 | 6.3 | 2500 | 0.3307 | 0.8974 |
| 0.3318 | 6.55 | 2600 | 0.3359 | 0.8974 |
| 0.3312 | 6.81 | 2700 | 0.3308 | 0.8974 |
| 0.3155 | 7.06 | 2800 | 0.3317 | 0.8974 |
| 0.3304 | 7.31 | 2900 | 0.3362 | 0.8974 |
| 0.338 | 7.56 | 3000 | 0.3342 | 0.8974 |
| 0.3241 | 7.81 | 3100 | 0.3310 | 0.8974 |
| 0.3325 | 8.07 | 3200 | 0.3326 | 0.8974 |
| 0.3202 | 8.32 | 3300 | 0.3345 | 0.8974 |
| 0.3315 | 8.57 | 3400 | 0.3335 | 0.8974 |
| 0.3288 | 8.82 | 3500 | 0.3312 | 0.8974 |
| 0.3371 | 9.07 | 3600 | 0.3401 | 0.8974 |
| 0.3409 | 9.33 | 3700 | 0.3330 | 0.8974 |
| 0.3236 | 9.58 | 3800 | 0.3330 | 0.8974 |
| 0.3224 | 9.83 | 3900 | 0.3321 | 0.8974 |
| 0.3439 | 10.08 | 4000 | 0.3326 | 0.8974 |
| 0.3382 | 10.33 | 4100 | 0.3310 | 0.8974 |
| 0.3307 | 10.59 | 4200 | 0.3382 | 0.8974 |
| 0.3231 | 10.84 | 4300 | 0.3325 | 0.8974 |
| 0.3095 | 11.09 | 4400 | 0.3348 | 0.8974 |
| 0.3442 | 11.34 | 4500 | 0.3327 | 0.8974 |
| 0.3269 | 11.59 | 4600 | 0.3326 | 0.8974 |
| 0.3323 | 11.85 | 4700 | 0.3308 | 0.8974 |
| 0.3313 | 12.1 | 4800 | 0.3308 | 0.8974 |
| 0.3283 | 12.35 | 4900 | 0.3314 | 0.8974 |
| 0.3331 | 12.6 | 5000 | 0.3307 | 0.8974 |
| 0.3317 | 12.85 | 5100 | 0.3344 | 0.8974 |
| 0.3283 | 13.11 | 5200 | 0.3320 | 0.8974 |
| 0.3263 | 13.36 | 5300 | 0.3311 | 0.8974 |
| 0.3421 | 13.61 | 5400 | 0.3307 | 0.8974 |
| 0.3164 | 13.86 | 5500 | 0.3318 | 0.8974 |
| 0.3315 | 14.11 | 5600 | 0.3335 | 0.8974 |
| 0.3415 | 14.37 | 5700 | 0.3315 | 0.8974 |
| 0.3325 | 14.62 | 5800 | 0.3307 | 0.8974 |
| 0.3264 | 14.87 | 5900 | 0.3330 | 0.8974 |
| 0.3223 | 15.12 | 6000 | 0.3307 | 0.8974 |
| 0.3289 | 15.37 | 6100 | 0.3329 | 0.8974 |
| 0.3353 | 15.63 | 6200 | 0.3311 | 0.8974 |
| 0.3246 | 15.88 | 6300 | 0.3311 | 0.8974 |
| 0.3425 | 16.13 | 6400 | 0.3307 | 0.8974 |
| 0.331 | 16.38 | 6500 | 0.3307 | 0.8974 |
| 0.3293 | 16.64 | 6600 | 0.3353 | 0.8974 |
| 0.3249 | 16.89 | 6700 | 0.3339 | 0.8974 |
| 0.3214 | 17.14 | 6800 | 0.3338 | 0.8974 |
| 0.3259 | 17.39 | 6900 | 0.3327 | 0.8974 |
| 0.3408 | 17.64 | 7000 | 0.3318 | 0.8974 |
| 0.3258 | 17.9 | 7100 | 0.3318 | 0.8974 |
| 0.3299 | 18.15 | 7200 | 0.3308 | 0.8974 |
| 0.327 | 18.4 | 7300 | 0.3371 | 0.8974 |
| 0.3317 | 18.65 | 7400 | 0.3308 | 0.8974 |
| 0.3291 | 18.9 | 7500 | 0.3310 | 0.8974 |
| 0.3263 | 19.16 | 7600 | 0.3325 | 0.8974 |
| 0.3223 | 19.41 | 7700 | 0.3346 | 0.8974 |
| 0.3403 | 19.66 | 7800 | 0.3316 | 0.8974 |
| 0.3265 | 19.91 | 7900 | 0.3309 | 0.8974 |
| 0.33 | 20.16 | 8000 | 0.3318 | 0.8974 |
| 0.3488 | 20.42 | 8100 | 0.3313 | 0.8974 |
| 0.3293 | 20.67 | 8200 | 0.3335 | 0.8974 |
| 0.3095 | 20.92 | 8300 | 0.3356 | 0.8974 |
| 0.3366 | 21.17 | 8400 | 0.3332 | 0.8974 |
| 0.317 | 21.42 | 8500 | 0.3338 | 0.8974 |
| 0.3299 | 21.68 | 8600 | 0.3308 | 0.8974 |
| 0.3434 | 21.93 | 8700 | 0.3310 | 0.8974 |
| 0.3208 | 22.18 | 8800 | 0.3309 | 0.8974 |
| 0.3351 | 22.43 | 8900 | 0.3324 | 0.8974 |
| 0.3301 | 22.68 | 9000 | 0.3308 | 0.8974 |
| 0.3196 | 22.94 | 9100 | 0.3330 | 0.8974 |
| 0.3339 | 23.19 | 9200 | 0.3333 | 0.8974 |
| 0.3249 | 23.44 | 9300 | 0.3308 | 0.8974 |
| 0.3247 | 23.69 | 9400 | 0.3338 | 0.8974 |
| 0.3369 | 23.94 | 9500 | 0.3313 | 0.8974 |
| 0.3291 | 24.2 | 9600 | 0.3320 | 0.8974 |
| 0.3307 | 24.45 | 9700 | 0.3309 | 0.8974 |
| 0.3328 | 24.7 | 9800 | 0.3307 | 0.8974 |
| 0.3277 | 24.95 | 9900 | 0.3342 | 0.8974 |
| 0.3278 | 25.2 | 10000 | 0.3310 | 0.8974 |
| 0.3197 | 25.46 | 10100 | 0.3349 | 0.8974 |
| 0.3273 | 25.71 | 10200 | 0.3321 | 0.8974 |
| 0.3345 | 25.96 | 10300 | 0.3312 | 0.8974 |
| 0.3351 | 26.21 | 10400 | 0.3325 | 0.8974 |
| 0.3144 | 26.47 | 10500 | 0.3346 | 0.8974 |
| 0.3361 | 26.72 | 10600 | 0.3311 | 0.8974 |
| 0.3334 | 26.97 | 10700 | 0.3307 | 0.8974 |
| 0.3287 | 27.22 | 10800 | 0.3373 | 0.8974 |
| 0.3374 | 27.47 | 10900 | 0.3307 | 0.8974 |
| 0.3302 | 27.73 | 11000 | 0.3307 | 0.8974 |
| 0.3245 | 27.98 | 11100 | 0.3315 | 0.8974 |
| 0.3353 | 28.23 | 11200 | 0.3335 | 0.8974 |
| 0.3191 | 28.48 | 11300 | 0.3336 | 0.8974 |
| 0.3226 | 28.73 | 11400 | 0.3308 | 0.8974 |
| 0.3384 | 28.99 | 11500 | 0.3322 | 0.8974 |
| 0.3368 | 29.24 | 11600 | 0.3337 | 0.8974 |
| 0.3224 | 29.49 | 11700 | 0.3332 | 0.8974 |
| 0.3224 | 29.74 | 11800 | 0.3318 | 0.8974 |
| 0.3363 | 29.99 | 11900 | 0.3310 | 0.8974 |
| 0.327 | 30.25 | 12000 | 0.3307 | 0.8974 |
| 0.3291 | 30.5 | 12100 | 0.3307 | 0.8974 |
| 0.3369 | 30.75 | 12200 | 0.3322 | 0.8974 |
| 0.3211 | 31.0 | 12300 | 0.3329 | 0.8974 |
| 0.329 | 31.25 | 12400 | 0.3321 | 0.8974 |
| 0.3206 | 31.51 | 12500 | 0.3309 | 0.8974 |
| 0.3339 | 31.76 | 12600 | 0.3332 | 0.8974 |
| 0.3323 | 32.01 | 12700 | 0.3316 | 0.8974 |
| 0.3273 | 32.26 | 12800 | 0.3323 | 0.8974 |
| 0.3362 | 32.51 | 12900 | 0.3307 | 0.8974 |
| 0.3387 | 32.77 | 13000 | 0.3309 | 0.8974 |
| 0.3173 | 33.02 | 13100 | 0.3311 | 0.8974 |
| 0.3291 | 33.27 | 13200 | 0.3309 | 0.8974 |
| 0.3316 | 33.52 | 13300 | 0.3315 | 0.8974 |
| 0.3366 | 33.77 | 13400 | 0.3332 | 0.8974 |
| 0.3115 | 34.03 | 13500 | 0.3383 | 0.8974 |
| 0.3275 | 34.28 | 13600 | 0.3324 | 0.8974 |
| 0.3373 | 34.53 | 13700 | 0.3315 | 0.8974 |
| 0.3247 | 34.78 | 13800 | 0.3313 | 0.8974 |
| 0.3349 | 35.03 | 13900 | 0.3325 | 0.8974 |
| 0.3223 | 35.29 | 14000 | 0.3312 | 0.8974 |
| 0.3321 | 35.54 | 14100 | 0.3308 | 0.8974 |
| 0.3304 | 35.79 | 14200 | 0.3316 | 0.8974 |
| 0.3262 | 36.04 | 14300 | 0.3320 | 0.8974 |
| 0.3239 | 36.29 | 14400 | 0.3317 | 0.8974 |
| 0.3325 | 36.55 | 14500 | 0.3308 | 0.8974 |
| 0.325 | 36.8 | 14600 | 0.3316 | 0.8974 |
| 0.3416 | 37.05 | 14700 | 0.3311 | 0.8974 |
| 0.3226 | 37.3 | 14800 | 0.3309 | 0.8974 |
| 0.3286 | 37.56 | 14900 | 0.3307 | 0.8974 |
| 0.3284 | 37.81 | 15000 | 0.3312 | 0.8974 |
| 0.3298 | 38.06 | 15100 | 0.3326 | 0.8974 |
| 0.3383 | 38.31 | 15200 | 0.3311 | 0.8974 |
| 0.3418 | 38.56 | 15300 | 0.3308 | 0.8974 |
| 0.3123 | 38.82 | 15400 | 0.3311 | 0.8974 |
| 0.3237 | 39.07 | 15500 | 0.3346 | 0.8974 |
| 0.3261 | 39.32 | 15600 | 0.3325 | 0.8974 |
| 0.3269 | 39.57 | 15700 | 0.3312 | 0.8974 |
| 0.3267 | 39.82 | 15800 | 0.3319 | 0.8974 |
| 0.3381 | 40.08 | 15900 | 0.3327 | 0.8974 |
| 0.3238 | 40.33 | 16000 | 0.3326 | 0.8974 |
| 0.3299 | 40.58 | 16100 | 0.3320 | 0.8974 |
| 0.3385 | 40.83 | 16200 | 0.3309 | 0.8974 |
| 0.3268 | 41.08 | 16300 | 0.3322 | 0.8974 |
| 0.3253 | 41.34 | 16400 | 0.3320 | 0.8974 |
| 0.3261 | 41.59 | 16500 | 0.3314 | 0.8974 |
| 0.3362 | 41.84 | 16600 | 0.3324 | 0.8974 |
| 0.3203 | 42.09 | 16700 | 0.3326 | 0.8974 |
| 0.325 | 42.34 | 16800 | 0.3323 | 0.8974 |
| 0.3172 | 42.6 | 16900 | 0.3326 | 0.8974 |
| 0.3361 | 42.85 | 17000 | 0.3308 | 0.8974 |
| 0.3432 | 43.1 | 17100 | 0.3310 | 0.8974 |
| 0.3396 | 43.35 | 17200 | 0.3313 | 0.8974 |
| 0.3163 | 43.6 | 17300 | 0.3328 | 0.8974 |
| 0.3353 | 43.86 | 17400 | 0.3318 | 0.8974 |
| 0.3299 | 44.11 | 17500 | 0.3317 | 0.8974 |
| 0.3213 | 44.36 | 17600 | 0.3319 | 0.8974 |
| 0.3253 | 44.61 | 17700 | 0.3329 | 0.8974 |
| 0.3391 | 44.86 | 17800 | 0.3322 | 0.8974 |
| 0.3179 | 45.12 | 17900 | 0.3330 | 0.8974 |
| 0.3348 | 45.37 | 18000 | 0.3321 | 0.8974 |
| 0.3116 | 45.62 | 18100 | 0.3326 | 0.8974 |
| 0.3334 | 45.87 | 18200 | 0.3322 | 0.8974 |
| 0.3401 | 46.12 | 18300 | 0.3315 | 0.8974 |
| 0.3381 | 46.38 | 18400 | 0.3311 | 0.8974 |
| 0.3154 | 46.63 | 18500 | 0.3327 | 0.8974 |
| 0.3348 | 46.88 | 18600 | 0.3322 | 0.8974 |
| 0.3285 | 47.13 | 18700 | 0.3325 | 0.8974 |
| 0.3256 | 47.39 | 18800 | 0.3329 | 0.8974 |
| 0.3389 | 47.64 | 18900 | 0.3325 | 0.8974 |
| 0.3288 | 47.89 | 19000 | 0.3327 | 0.8974 |
| 0.3172 | 48.14 | 19100 | 0.3327 | 0.8974 |
| 0.3211 | 48.39 | 19200 | 0.3325 | 0.8974 |
| 0.3348 | 48.65 | 19300 | 0.3325 | 0.8974 |
| 0.3327 | 48.9 | 19400 | 0.3326 | 0.8974 |
| 0.3341 | 49.15 | 19500 | 0.3326 | 0.8974 |
| 0.3344 | 49.4 | 19600 | 0.3325 | 0.8974 |
| 0.3207 | 49.65 | 19700 | 0.3326 | 0.8974 |
| 0.3299 | 49.91 | 19800 | 0.3326 | 0.8974 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.0.post302
- Datasets 2.14.5
- Tokenizers 0.13.3
|
sean202302/ddpm-butterflies-32px
|
sean202302
| 2023-09-17T11:37:07Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2023-09-17T11:16:48Z |
---
license: mit
tags:
- pytorch
- diffusers
---
# This model performs unconditional generation of butterfly images
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('sean202302/ddpm-butterflies-32px')
image = pipeline().images[0]
image
```
|
Bingsu/vitB32_bert_ko_small_clip
|
Bingsu
| 2023-09-17T11:36:42Z | 94 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vision-text-dual-encoder",
"feature-extraction",
"clip",
"ko",
"arxiv:2004.09813",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-08T00:44:39Z |
---
tags:
- clip
language: ko
license: mit
---
# vitB32_bert_ko_small_clip
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) + [lassl/bert-ko-small](https://huggingface.co/lassl/bert-ko-small) CLIP Model
[training code(github)](https://github.com/Bing-su/KoCLIP_training_code)
## Train
Following SBERT's [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), the weights of the `openai/clip-vit-base-patch32` text model were distilled into `lassl/bert-ko-small`. Unlike the paper, mean pooling was not used; the Hugging Face model's default pooling was kept as-is.
Training data: [AI Hub Korean-English translation (parallel) corpus](https://aihub.or.kr/aidata/87)
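The distillation setup from that paper can be sketched as a simple objective: the student's embedding of both the source sentence and its translation is pulled toward the teacher's embedding. A toy illustration with plain Python lists (all vectors below are made up):

```python
def mse(a, b):
    """Mean squared error between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(teacher_src, student_src, student_tgt):
    # Pull the student's embedding of the English sentence AND of its
    # Korean translation toward the teacher's English embedding.
    return mse(teacher_src, student_src) + mse(teacher_src, student_tgt)

teacher = [1.0, 0.0, 0.5]      # CLIP text encoder embedding of "two cats"
student_en = [0.9, 0.1, 0.5]   # student embedding of "two cats"
student_ko = [0.8, 0.0, 0.4]   # student embedding of "고양이 두 마리"
print(distillation_loss(teacher, student_en, student_ko))  # ≈ 0.0233
```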
## How to Use
#### 1.
```python
import requests
from PIL import Image
from transformers import VisionTextDualEncoderProcessor, VisionTextDualEncoderModel # or Auto...
model = VisionTextDualEncoderModel.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
processor = VisionTextDualEncoderProcessor.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
```
```pycon
>>> probs
tensor([[0.9756, 0.0244]], grad_fn=<SoftmaxBackward0>)
```
#### 2.
```python
from transformers import AutoModel, AutoProcessor, pipeline
model = AutoModel.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
processor = AutoProcessor.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
pipe = pipeline("zero-shot-image-classification", model=model, feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
result = pipe(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리", "고양이 두 마리와 리모컨 두 개"], hypothesis_template="{}")
```
```pycon
>>> result
[{'score': 0.871887743473053, 'label': '고양이 두 마리와 리모컨 두 개'},
{'score': 0.12316706776618958, 'label': '고양이 두 마리'},
{'score': 0.004945191089063883, 'label': '고양이 한 마리'}]
```
|
Sanyam0605/q-FrozenLake-v1-4x4-noSlippery
|
Sanyam0605
| 2023-09-17T11:27:09Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T11:27:07Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Sanyam0605/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
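Once the Q-table is loaded, acting greedily is just an argmax over the action values for the current state. A minimal sketch with a toy table (the real 4x4 map has 16 states and 4 actions; the values below are invented):

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for this state."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy 3-state, 4-action Q-table (FrozenLake 4x4 would have 16 rows).
qtable = [
    [0.1, 0.9, 0.0, 0.2],
    [0.5, 0.4, 0.3, 0.2],
    [0.0, 0.0, 0.7, 0.1],
]
print([greedy_action(qtable, s) for s in range(3)])  # [1, 0, 2]
```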
|
MaxArb/SFStars
|
MaxArb
| 2023-09-17T10:54:59Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-09-17T10:54:16Z |
---
license: cc-by-nc-nd-4.0
---
|
ayoubkirouane/Stable-Cats-Generator
|
ayoubkirouane
| 2023-09-17T10:44:11Z | 37 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-17T09:57:18Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
# Model Card: Stable-Cats-Generator
## Model Information
- **Model Name:** Stable-Cats-Generator
- **Model Version:** v1
- **Model Type:** Image Generation
- **Based on:** Stable Diffusion v2

## Model Description
Stable-Cats-Generator is an image generation model fine-tuned for generating white cat images based on text prompts.
It is built upon **Stable Diffusion v2** and utilizes a pretrained text encoder (OpenCLIP-ViT/H) for text-to-image generation.
**Stable Diffusion v2** is a later version of the Stable Diffusion text-to-image diffusion model.
It was released in late 2022 and is based on the same core principles as the original Stable Diffusion model, but with a number of improvements.
## Intended Use
- Safe content generation
- Artistic and creative processes
- Bias and limitation exploration
- Educational and creative tools
## Potential Use Cases
- Generating cat images for artistic purposes
- Investigating biases and limitations of generative models
- Creating safe and customizable content
- Enhancing educational or creative tools
## Model Capabilities
- High-quality white cat image generation
- Quick image generation, even on single GPUs
- Customizable for specific needs and datasets
## Limitations
- May not always produce realistic images
- Limited to generating white cat images based on text prompts
- Ethical considerations when using generated content
## Ethical Considerations
- Ensure generated content is safe and non-harmful
- Monitor and mitigate potential biases in generated content
- Respect copyright and licensing when using generated images
## Responsible AI
- Ongoing monitoring and evaluation of model outputs
- Regular updates to address limitations and improve safety
- Compliance with ethical guidelines and legal regulations
## Disclaimer
This model card serves as a documentation tool and does not constitute legal or ethical guidance. Users of the model are responsible for adhering to ethical and legal standards in their use of the model.
## Usage
```bash
pip install diffusers==0.11.1
pip install transformers scipy ftfy accelerate
```
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("ayoubkirouane/Stable-Cats-Generator", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "A photo of a picture-perfect white cat."
image = pipe(prompt).images[0] # image here is in [PIL format](https://pillow.readthedocs.io/en/stable/)
# To display the image you can either save it:
image.save("cat.png")
# or, in a Google Colab, display it directly with
image
```
|
c-g/q-FrozenLake-v1-4x4-noSlippery
|
c-g
| 2023-09-17T10:35:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T10:35:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="c-g/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Daddy458/dream
|
Daddy458
| 2023-09-17T10:29:42Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:finetune:stabilityai/stable-diffusion-2-1-base",
"region:us"
] |
text-to-image
| 2023-09-17T09:48:20Z |
---
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of AJ
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
wanzhenen/sd-class-butterflies-32
|
wanzhenen
| 2023-09-17T10:24:48Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-09-17T10:24:44Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('wanzhenen/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
lsoni/bert-finetuned-ner-synonym-replacement-model
|
lsoni
| 2023-09-17T10:24:37Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:lsoni/combined_tweetner7_synonym_replacement_augmented_dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-12T15:43:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-synonym-replacement-model
results: []
datasets:
- lsoni/combined_tweetner7_synonym_replacement_augmented_dataset
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-synonym-replacement-model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on a combined training dataset: tweetner7 (train_2021) plus a dataset augmented with the synonym-replacement technique (lsoni/combined_tweetner7_synonym_replacement_augmented_dataset). Evaluation uses the corresponding combined evaluation dataset: tweetner7 (validation_2021) plus its synonym-replacement augmentation (lsoni/combined_tweetner7_synonym_replacement_augmented_dataset_eval).
It achieves the following results on the evaluation set:
- Loss: 0.4484
- Precision: 0.6804
- Recall: 0.6727
- F1: 0.6765
- Accuracy: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5525 | 1.0 | 624 | 0.4142 | 0.7040 | 0.6120 | 0.6548 | 0.8774 |
| 0.3293 | 2.0 | 1248 | 0.4101 | 0.7067 | 0.6628 | 0.6841 | 0.8833 |
| 0.2536 | 3.0 | 1872 | 0.4484 | 0.6804 | 0.6727 | 0.6765 | 0.8780 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1
- Datasets 2.10.1
- Tokenizers 0.12.1
|
s3nh/PY007-TinyLlama-1.1B-Chat-v0.2-GGUF
|
s3nh
| 2023-09-17T09:52:54Z | 17 | 7 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-17T09:51:58Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.2).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- **Single-file deployment:** models can be easily distributed and loaded, and do not require any external files for additional information.
- **Extensible:** new features can be added to GGML-based executors / new information can be added to GGUF models without breaking compatibility with existing models.
- **mmap compatibility:** models can be loaded using mmap for fast loading and saving.
- **Easy to use:** models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- **Full information:** all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets a model be annotated with additional information that may be useful for inference or for identifying the model.
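The key-value idea can be illustrated without parsing a real file: GGJT stored hyperparameters as a positional list, while GGUF stores them under string keys that readers can look up or ignore. A schematic sketch (the key names follow GGUF conventions, but the values are illustrative, not read from this model):

```python
# GGJT-style: positional, untyped — adding a field breaks old readers.
ggjt_hparams = [32000, 2048, 32, 22]  # vocab, n_embd, n_head, n_layer (by position)

# GGUF-style: string keys with typed values — extensible and self-describing.
gguf_metadata = {
    "general.architecture": "llama",
    "general.name": "TinyLlama-1.1B-Chat-v0.2",
    "llama.embedding_length": 2048,
    "llama.block_count": 22,
    "tokenizer.ggml.model": "llama",
}

# A reader only looks up the keys it knows; unknown keys are simply ignored.
def read_key(metadata, key, default=None):
    return metadata.get(key, default)

print(read_key(gguf_metadata, "llama.block_count"))              # 22
print(read_key(gguf_metadata, "llama.rope.freq_base", 10000.0))  # falls back to default
```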
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
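Perplexity is the exponential of the mean per-token negative log-likelihood, so figures like the ones above can be reproduced from raw token losses. A generic sketch (the NLL values are made up, and this is not the llama.cpp evaluation code):

```python
import math

def perplexity(token_nlls):
    """exp(mean negative log-likelihood) over the evaluation tokens."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Lower quantization error -> NLLs closer to the fp16 baseline -> lower perplexity.
nlls = [1.9, 1.7, 1.8, 1.6]
print(round(perplexity(nlls), 4))  # ≈ 5.7546
```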
### inference
TODO
# Original model card
|
nikcheerla/amd-model-v5
|
nikcheerla
| 2023-09-17T09:39:12Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-17T09:39:02Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nikcheerla/amd-model-v5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
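Step 2 — fitting a classification head on frozen embeddings — can be sketched with a tiny logistic-regression head trained by stochastic gradient descent. This is illustrative only: SetFit's actual head is a scikit-learn logistic regression over Sentence Transformer embeddings, and the data below is synthetic:

```python
import math

def train_head(embeddings, labels, lr=0.5, epochs=200):
    """Fit w, b for a binary logistic head on fixed embeddings."""
    dim = len(embeddings[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(embeddings, labels):
            logit = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-logit))
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g

    def predict(x):
        return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)
    return predict

# Pretend these are sentence embeddings from the contrastively tuned encoder.
X = [[1.0, 0.2], [0.9, 0.1], [-0.8, -0.3], [-1.0, -0.1]]
y = [1, 1, 0, 0]
head = train_head(X, y)
print([head(x) for x in X])  # [1, 1, 0, 0]
```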
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nikcheerla/amd-model-v5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
a2ran/FingerFriend-t5-base
|
a2ran
| 2023-09-17T09:36:15Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:eenzeenee/t5-base-korean-summarization",
"base_model:finetune:eenzeenee/t5-base-korean-summarization",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-17T09:28:55Z |
---
base_model: eenzeenee/t5-base-korean-summarization
tags:
- generated_from_trainer
model-index:
- name: FingerFriend-t5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FingerFriend-t5-base
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.87 | 1.0 | 683 | 0.5576 |
| 0.5197 | 2.0 | 1366 | 0.4856 |
| 0.4303 | 3.0 | 2049 | 0.4572 |
| 0.373 | 4.0 | 2732 | 0.4446 |
| 0.332 | 5.0 | 3415 | 0.4330 |
| 0.2961 | 6.0 | 4098 | 0.4322 |
| 0.2673 | 7.0 | 4781 | 0.4406 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
juniam/alephbertgimmel-finetuned-parashootandHeQ
|
juniam
| 2023-09-17T09:10:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"he",
"dataset:imvladikon/parashoot",
"dataset:pig4431/HeQ_v1",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-17T09:03:54Z |
---
license: cc-by-4.0
datasets:
- imvladikon/parashoot
- pig4431/HeQ_v1
language:
- he
library_name: transformers
---
|
araffin/ppo-MountainCarContinuous-v0
|
araffin
| 2023-09-17T09:06:26Z | 5 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCarContinuous-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T09:04:56Z |
---
library_name: stable-baselines3
tags:
- MountainCarContinuous-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCarContinuous-v0
type: MountainCarContinuous-v0
metrics:
- type: mean_reward
value: -1.16 +/- 0.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **MountainCarContinuous-v0**
This is a trained model of a **PPO** agent playing **MountainCarContinuous-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MountainCarContinuous-v0 -orga araffin -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MountainCarContinuous-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env MountainCarContinuous-v0 -orga araffin -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MountainCarContinuous-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env MountainCarContinuous-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MountainCarContinuous-v0 -f logs/ -orga araffin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 0.1),
('ent_coef', 0.00429),
('gae_lambda', 0.9),
('gamma', 0.9999),
('learning_rate', 7.77e-05),
('max_grad_norm', 5),
('n_envs', 1),
('n_epochs', 10),
('n_steps', 8),
('n_timesteps', 20000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3.29, ortho_init=False)'),
('use_sde', True),
('vf_coef', 0.19),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
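The `clip_range` of 0.1 refers to PPO's clipped surrogate objective, which bounds how far one update can move the policy by clipping the probability ratio to [1 − ε, 1 + ε]. A bare-bones numeric sketch:

```python
def clipped_surrogate(ratio, advantage, clip_range=0.1):
    """PPO objective for one sample: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped = max(1.0 - clip_range, min(1.0 + clip_range, ratio))
    return min(ratio * advantage, clipped * advantage)

# With a positive advantage, gains from pushing the ratio above 1+eps are clipped.
print(clipped_surrogate(1.5, advantage=2.0))   # 2.2 (clipped at 1.1 * 2.0)
# With a negative advantage, the min keeps the (worse) unclipped term.
print(clipped_surrogate(1.5, advantage=-2.0))  # -3.0
```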
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
pittawat/vit-base-letter
|
pittawat
| 2023-09-17T09:01:40Z | 46 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:pittawat/letter_recognition",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-20T11:59:23Z |
---
language:
- en
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- pittawat/letter_recognition
metrics:
- accuracy
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: vit-base-letter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-letter
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the pittawat/letter_recognition dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0515
- Accuracy: 0.9881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5539 | 0.12 | 100 | 0.5576 | 0.9308 |
| 0.2688 | 0.25 | 200 | 0.2371 | 0.9665 |
| 0.1568 | 0.37 | 300 | 0.1829 | 0.9688 |
| 0.1684 | 0.49 | 400 | 0.1611 | 0.9662 |
| 0.1584 | 0.62 | 500 | 0.1340 | 0.9673 |
| 0.1569 | 0.74 | 600 | 0.1933 | 0.9531 |
| 0.0992 | 0.86 | 700 | 0.1031 | 0.9781 |
| 0.0573 | 0.98 | 800 | 0.1024 | 0.9781 |
| 0.0359 | 1.11 | 900 | 0.0950 | 0.9804 |
| 0.0961 | 1.23 | 1000 | 0.1200 | 0.9723 |
| 0.0334 | 1.35 | 1100 | 0.0995 | 0.975 |
| 0.0855 | 1.48 | 1200 | 0.0791 | 0.9815 |
| 0.0902 | 1.6 | 1300 | 0.0981 | 0.9765 |
| 0.0583 | 1.72 | 1400 | 0.1192 | 0.9712 |
| 0.0683 | 1.85 | 1500 | 0.0692 | 0.9846 |
| 0.1188 | 1.97 | 1600 | 0.0931 | 0.9785 |
| 0.0366 | 2.09 | 1700 | 0.0919 | 0.9804 |
| 0.0276 | 2.21 | 1800 | 0.0667 | 0.9846 |
| 0.0309 | 2.34 | 1900 | 0.0599 | 0.9858 |
| 0.0183 | 2.46 | 2000 | 0.0892 | 0.9769 |
| 0.0431 | 2.58 | 2100 | 0.0663 | 0.985 |
| 0.0424 | 2.71 | 2200 | 0.0643 | 0.9862 |
| 0.0453 | 2.83 | 2300 | 0.0646 | 0.9862 |
| 0.0528 | 2.95 | 2400 | 0.0550 | 0.985 |
| 0.0045 | 3.08 | 2500 | 0.0579 | 0.9846 |
| 0.007 | 3.2 | 2600 | 0.0517 | 0.9885 |
| 0.0048 | 3.32 | 2700 | 0.0584 | 0.9865 |
| 0.019 | 3.44 | 2800 | 0.0560 | 0.9873 |
| 0.0038 | 3.57 | 2900 | 0.0515 | 0.9881 |
| 0.0219 | 3.69 | 3000 | 0.0527 | 0.9881 |
| 0.0117 | 3.81 | 3100 | 0.0523 | 0.9888 |
| 0.0035 | 3.94 | 3200 | 0.0559 | 0.9865 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
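Since the card does not list its label set, here is a minimal decoding sketch under the assumption that the letter-recognition dataset's 26 classes map to the uppercase letters A–Z in index order:

```python
import string

# Assumed id2label mapping: 26 classes, A-Z in alphabetical index order.
id2label = {i: letter for i, letter in enumerate(string.ascii_uppercase)}

def decode_prediction(class_index: int) -> str:
    """Map a predicted class index back to its letter (under the assumed ordering)."""
    return id2label[class_index]

print(decode_prediction(0), decode_prediction(25))  # A Z under this assumption
```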
|
DifeiT/rsna-intracranial-hemorrhage-detection
|
DifeiT
| 2023-09-17T08:53:12Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T03:45:13Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: rsna-intracranial-hemorrhage-detection
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6151724137931035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rsna-intracranial-hemorrhage-detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2164
- Accuracy: 0.6152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
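The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` ramps the learning rate up over the first 10% of steps and then decays it linearly to zero. A pure-Python sketch of that shape (matching Hugging Face's `get_linear_schedule_with_warmup` up to rounding, using this run's final step count of 11900):

```python
def linear_schedule_lr(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    """Linear warmup over the first warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 11900  # final step from the training log above
print(linear_schedule_lr(0, total))      # 0.0 at the start of warmup
print(linear_schedule_lr(1190, total))   # peak 5e-05 at the end of warmup
print(linear_schedule_lr(total, total))  # decayed back to 0.0
```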
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5655 | 1.0 | 238 | 1.5235 | 0.4039 |
| 1.3848 | 2.0 | 477 | 1.3622 | 0.4692 |
| 1.2812 | 3.0 | 716 | 1.2811 | 0.5150 |
| 1.2039 | 4.0 | 955 | 1.1795 | 0.5556 |
| 1.1641 | 5.0 | 1193 | 1.1627 | 0.5534 |
| 1.1961 | 6.0 | 1432 | 1.1393 | 0.5705 |
| 1.1382 | 7.0 | 1671 | 1.0921 | 0.5804 |
| 0.9653 | 8.0 | 1910 | 1.0790 | 0.5876 |
| 0.9346 | 9.0 | 2148 | 1.0727 | 0.5931 |
| 0.9083 | 10.0 | 2387 | 1.0605 | 0.5994 |
| 0.8936 | 11.0 | 2626 | 1.0147 | 0.6146 |
| 0.8504 | 12.0 | 2865 | 1.0849 | 0.5818 |
| 0.8544 | 13.0 | 3103 | 1.0349 | 0.6052 |
| 0.7884 | 14.0 | 3342 | 1.0435 | 0.6074 |
| 0.7974 | 15.0 | 3581 | 1.0082 | 0.6127 |
| 0.7921 | 16.0 | 3820 | 1.0438 | 0.6017 |
| 0.709 | 17.0 | 4058 | 1.0484 | 0.6094 |
| 0.6646 | 18.0 | 4297 | 1.0554 | 0.6221 |
| 0.6832 | 19.0 | 4536 | 1.0455 | 0.6124 |
| 0.7076 | 20.0 | 4775 | 1.0905 | 0.6 |
| 0.7442 | 21.0 | 5013 | 1.1094 | 0.6008 |
| 0.6332 | 22.0 | 5252 | 1.0777 | 0.6063 |
| 0.6417 | 23.0 | 5491 | 1.0765 | 0.6141 |
| 0.6267 | 24.0 | 5730 | 1.1057 | 0.6091 |
| 0.6082 | 25.0 | 5968 | 1.0962 | 0.6171 |
| 0.6191 | 26.0 | 6207 | 1.1178 | 0.6039 |
| 0.5654 | 27.0 | 6446 | 1.1386 | 0.5948 |
| 0.5776 | 28.0 | 6685 | 1.1121 | 0.6105 |
| 0.5531 | 29.0 | 6923 | 1.1497 | 0.6030 |
| 0.6275 | 30.0 | 7162 | 1.1796 | 0.6028 |
| 0.5373 | 31.0 | 7401 | 1.1306 | 0.6132 |
| 0.4775 | 32.0 | 7640 | 1.1523 | 0.6058 |
| 0.5469 | 33.0 | 7878 | 1.1634 | 0.6127 |
| 0.4934 | 34.0 | 8117 | 1.1853 | 0.616 |
| 0.5233 | 35.0 | 8356 | 1.2018 | 0.6055 |
| 0.4896 | 36.0 | 8595 | 1.1585 | 0.6108 |
| 0.5122 | 37.0 | 8833 | 1.1874 | 0.6146 |
| 0.4726 | 38.0 | 9072 | 1.1608 | 0.6193 |
| 0.4372 | 39.0 | 9311 | 1.2403 | 0.6132 |
| 0.498 | 40.0 | 9550 | 1.1752 | 0.6201 |
| 0.4813 | 41.0 | 9788 | 1.2005 | 0.6166 |
| 0.4762 | 42.0 | 10027 | 1.2285 | 0.6022 |
| 0.4852 | 43.0 | 10266 | 1.2192 | 0.6119 |
| 0.4332 | 44.0 | 10505 | 1.2391 | 0.6218 |
| 0.3998 | 45.0 | 10743 | 1.1779 | 0.6196 |
| 0.4467 | 46.0 | 10982 | 1.2048 | 0.6284 |
| 0.4332 | 47.0 | 11221 | 1.2302 | 0.6188 |
| 0.4529 | 48.0 | 11460 | 1.2220 | 0.6188 |
| 0.4281 | 49.0 | 11698 | 1.2013 | 0.624 |
| 0.4199 | 49.84 | 11900 | 1.2164 | 0.6152 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Yurio27/Clase_02
|
Yurio27
| 2023-09-17T08:51:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-17T08:48:51Z |
---
license: creativeml-openrail-m
---
|
a2ran/FingerFriend-t5-small
|
a2ran
| 2023-09-17T07:54:16Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-16T14:39:34Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: FingerFriend-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FingerFriend-t5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6293 | 1.0 | 171 | 1.1671 |
| 1.195 | 2.0 | 342 | 1.0246 |
| 1.085 | 3.0 | 513 | 0.9553 |
| 1.0207 | 4.0 | 684 | 0.9096 |
| 0.9631 | 5.0 | 855 | 0.8782 |
| 0.9283 | 6.0 | 1026 | 0.8445 |
| 0.8987 | 7.0 | 1197 | 0.8352 |
| 0.8716 | 8.0 | 1368 | 0.8123 |
| 0.8556 | 9.0 | 1539 | 0.7983 |
| 0.8375 | 10.0 | 1710 | 0.7923 |
| 0.8239 | 11.0 | 1881 | 0.7757 |
| 0.8184 | 12.0 | 2052 | 0.7716 |
| 0.8053 | 13.0 | 2223 | 0.7642 |
| 0.7929 | 14.0 | 2394 | 0.7647 |
| 0.7867 | 15.0 | 2565 | 0.7597 |
| 0.7817 | 16.0 | 2736 | 0.7529 |
| 0.7751 | 17.0 | 2907 | 0.7506 |
| 0.7705 | 18.0 | 3078 | 0.7472 |
| 0.7657 | 19.0 | 3249 | 0.7467 |
| 0.7665 | 20.0 | 3420 | 0.7464 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mohaq123/setfit-minilm-distilled
|
mohaq123
| 2023-09-17T07:47:57Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-17T07:47:53Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# mohaq123/setfit-minilm-distilled
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("mohaq123/setfit-minilm-distilled")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
CyberHarem/oikawa_shizuku_idolmastercinderellagirls
|
CyberHarem
| 2023-09-17T07:44:20Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/oikawa_shizuku_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-17T07:26:56Z |
---
license: mit
datasets:
- CyberHarem/oikawa_shizuku_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of oikawa_shizuku_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7280, you need to download `7280/oikawa_shizuku_idolmastercinderellagirls.pt` as the embedding and `7280/oikawa_shizuku_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7280**, with a score of 0.680. The trigger words are:
1. `oikawa_shizuku_idolmastercinderellagirls`
2. `short_hair, brown_eyes, brown_hair, breasts, blush, smile, open_mouth, large_breasts, animal_ears, cow_horns, horns, cow_ears, neck_bell`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who face application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.641 | [Download](7800/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](7800/previews/pattern_2.png) | [<NSFW, click to see>](7800/previews/pattern_3.png) |  |  | [<NSFW, click to see>](7800/previews/pattern_6.png) | [<NSFW, click to see>](7800/previews/pattern_7.png) |  | [<NSFW, click to see>](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) | [<NSFW, click to see>](7800/previews/free.png) |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| **7280** | **0.680** | [**Download**](7280/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](7280/previews/pattern_2.png) | [<NSFW, click to see>](7280/previews/pattern_3.png) |  |  | [<NSFW, click to see>](7280/previews/pattern_6.png) | [<NSFW, click to see>](7280/previews/pattern_7.png) |  | [<NSFW, click to see>](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) | [<NSFW, click to see>](7280/previews/free.png) |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.561 | [Download](6760/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](6760/previews/pattern_2.png) | [<NSFW, click to see>](6760/previews/pattern_3.png) |  |  | [<NSFW, click to see>](6760/previews/pattern_6.png) | [<NSFW, click to see>](6760/previews/pattern_7.png) |  | [<NSFW, click to see>](6760/previews/bikini.png) | [<NSFW, click to see>](6760/previews/bondage.png) | [<NSFW, click to see>](6760/previews/free.png) |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.646 | [Download](6240/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](6240/previews/pattern_2.png) | [<NSFW, click to see>](6240/previews/pattern_3.png) |  |  | [<NSFW, click to see>](6240/previews/pattern_6.png) | [<NSFW, click to see>](6240/previews/pattern_7.png) |  | [<NSFW, click to see>](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) | [<NSFW, click to see>](6240/previews/free.png) |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.540 | [Download](5720/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5720/previews/pattern_2.png) | [<NSFW, click to see>](5720/previews/pattern_3.png) |  |  | [<NSFW, click to see>](5720/previews/pattern_6.png) | [<NSFW, click to see>](5720/previews/pattern_7.png) |  | [<NSFW, click to see>](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) | [<NSFW, click to see>](5720/previews/free.png) |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.577 | [Download](5200/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5200/previews/pattern_2.png) | [<NSFW, click to see>](5200/previews/pattern_3.png) |  |  | [<NSFW, click to see>](5200/previews/pattern_6.png) | [<NSFW, click to see>](5200/previews/pattern_7.png) |  | [<NSFW, click to see>](5200/previews/bikini.png) | [<NSFW, click to see>](5200/previews/bondage.png) | [<NSFW, click to see>](5200/previews/free.png) |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.590 | [Download](4680/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4680/previews/pattern_2.png) | [<NSFW, click to see>](4680/previews/pattern_3.png) |  |  | [<NSFW, click to see>](4680/previews/pattern_6.png) | [<NSFW, click to see>](4680/previews/pattern_7.png) |  | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) | [<NSFW, click to see>](4680/previews/free.png) |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.606 | [Download](4160/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4160/previews/pattern_2.png) | [<NSFW, click to see>](4160/previews/pattern_3.png) |  |  | [<NSFW, click to see>](4160/previews/pattern_6.png) | [<NSFW, click to see>](4160/previews/pattern_7.png) |  | [<NSFW, click to see>](4160/previews/bikini.png) | [<NSFW, click to see>](4160/previews/bondage.png) | [<NSFW, click to see>](4160/previews/free.png) |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.555 | [Download](3640/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3640/previews/pattern_2.png) | [<NSFW, click to see>](3640/previews/pattern_3.png) |  |  | [<NSFW, click to see>](3640/previews/pattern_6.png) | [<NSFW, click to see>](3640/previews/pattern_7.png) |  | [<NSFW, click to see>](3640/previews/bikini.png) | [<NSFW, click to see>](3640/previews/bondage.png) | [<NSFW, click to see>](3640/previews/free.png) |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.618 | [Download](3120/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3120/previews/pattern_2.png) | [<NSFW, click to see>](3120/previews/pattern_3.png) |  |  | [<NSFW, click to see>](3120/previews/pattern_6.png) | [<NSFW, click to see>](3120/previews/pattern_7.png) |  | [<NSFW, click to see>](3120/previews/bikini.png) | [<NSFW, click to see>](3120/previews/bondage.png) | [<NSFW, click to see>](3120/previews/free.png) |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.579 | [Download](2600/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2600/previews/pattern_2.png) | [<NSFW, click to see>](2600/previews/pattern_3.png) |  |  | [<NSFW, click to see>](2600/previews/pattern_6.png) | [<NSFW, click to see>](2600/previews/pattern_7.png) |  | [<NSFW, click to see>](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) | [<NSFW, click to see>](2600/previews/free.png) |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.533 | [Download](2080/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2080/previews/pattern_2.png) | [<NSFW, click to see>](2080/previews/pattern_3.png) |  |  | [<NSFW, click to see>](2080/previews/pattern_6.png) | [<NSFW, click to see>](2080/previews/pattern_7.png) |  | [<NSFW, click to see>](2080/previews/bikini.png) | [<NSFW, click to see>](2080/previews/bondage.png) | [<NSFW, click to see>](2080/previews/free.png) |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.447 | [Download](1560/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1560/previews/pattern_2.png) | [<NSFW, click to see>](1560/previews/pattern_3.png) |  |  | [<NSFW, click to see>](1560/previews/pattern_6.png) | [<NSFW, click to see>](1560/previews/pattern_7.png) |  | [<NSFW, click to see>](1560/previews/bikini.png) | [<NSFW, click to see>](1560/previews/bondage.png) | [<NSFW, click to see>](1560/previews/free.png) |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.464 | [Download](1040/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1040/previews/pattern_2.png) | [<NSFW, click to see>](1040/previews/pattern_3.png) |  |  | [<NSFW, click to see>](1040/previews/pattern_6.png) | [<NSFW, click to see>](1040/previews/pattern_7.png) |  | [<NSFW, click to see>](1040/previews/bikini.png) | [<NSFW, click to see>](1040/previews/bondage.png) | [<NSFW, click to see>](1040/previews/free.png) |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.293 | [Download](520/oikawa_shizuku_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](520/previews/pattern_2.png) | [<NSFW, click to see>](520/previews/pattern_3.png) |  |  | [<NSFW, click to see>](520/previews/pattern_6.png) | [<NSFW, click to see>](520/previews/pattern_7.png) |  | [<NSFW, click to see>](520/previews/bikini.png) | [<NSFW, click to see>](520/previews/bondage.png) | [<NSFW, click to see>](520/previews/free.png) |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
Yntec/aPhotographicTrend
|
Yntec
| 2023-09-17T07:37:08Z | 681 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Ciro_Negrogni",
"MagicArt35",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T12:13:13Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Ciro_Negrogni
- MagicArt35
---
# A Photographic Trend
AmovieX by MagicArt35 with the Photographic Trend LoRA by Ciro_Negrogni baked in. First version of three.
Second version with AmovieX's compositions: https://huggingface.co/Yntec/aMovieTrend
Third version with Photographic Trend's compositions: https://huggingface.co/Yntec/Trending
Samples and prompt:

Photo Pretty Cute Girl, highly detailed, trending on ArtStation, sitting, fantasy, beautiful detailed streetwear, gorgeous detailed hair, hat, Magazine ad, iconic, 1943, from the movie, sharp focus. Detailed masterpiece,

Cartoon CUTE LITTLE baby, CHIBI, gorgeous detailed hair, looking, cute socks, holding pillow, skirt, Magazine ad, iconic, 1940, sharp focus. pencil art By KlaysMoji and Clay Mann and and leyendecker and Dave Rapoza.
Original pages:
https://civitai.com/models/98543 (Photographic Trend)
https://civitai.com/models/94687/photo-movie-x (AmovieX)
# Recipe
- Merge Photographic Trend LoRA to checkpoint 1.0
Model A:
AmovieX
Output:
PhotographicTrendAmovieX
- SuperMerger Weight sum Train Difference use MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1
Model A:
PhotographicTrendAmovieX
Model B:
AmovieX
Output:
aPhotographicTrend
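The MBW row above assigns one alpha per UNet block (zeros for the early blocks, ones for the later ones), so the weight-sum part of the merge keeps model A's early blocks and takes model B's later blocks wholesale. A toy sketch of that per-block blend (block names and values are hypothetical, and the "Train Difference" variant is not modeled here):

```python
def mbw_weight_sum(model_a, model_b, block_alphas):
    """Sketch of SuperMerger's "Weight sum" MBW merge: each UNet block is
    blended as (1 - alpha) * A + alpha * B with its own alpha."""
    return {
        block: (1 - alpha) * model_a[block] + alpha * model_b[block]
        for (block, alpha) in zip(model_a, block_alphas)
    }

# Toy 4-block example with alphas 0,0,1,1 (hypothetical values, same pattern as the recipe)
a = {"IN00": 1.0, "IN01": 2.0, "OUT00": 3.0, "OUT01": 4.0}
b = {"IN00": 10.0, "IN01": 20.0, "OUT00": 30.0, "OUT01": 40.0}
print(mbw_weight_sum(a, b, [0, 0, 1, 1]))  # {'IN00': 1.0, 'IN01': 2.0, 'OUT00': 30.0, 'OUT01': 40.0}
```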
|
wooii/DQN-SpaceInvadersNoFrameskip-v4
|
wooii
| 2023-09-17T07:37:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T06:14:11Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 587.00 +/- 118.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wooii -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wooii -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga wooii
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
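The `exploration_*` entries define DQN's ε-greedy schedule: ε decays linearly from 1.0 to `exploration_final_eps` over the first `exploration_fraction` of the 10M timesteps, then holds constant. A sketch mirroring SB3's linear schedule (not the library code itself):

```python
def epsilon(step, total_steps=10_000_000, fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """SB3-style linear exploration schedule: decay epsilon from initial_eps to
    final_eps over the first `fraction` of training, then hold it constant."""
    progress = step / total_steps
    if progress > fraction:
        return final_eps
    return initial_eps + progress * (final_eps - initial_eps) / fraction

print(epsilon(0))          # 1.0 at the start of training
print(epsilon(500_000))    # ~0.505, halfway through the decay window
print(epsilon(5_000_000))  # 0.01 after the decay window
```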
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
hrtnisri2016/image_classification
|
hrtnisri2016
| 2023-09-17T07:25:05Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T02:10:18Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.46875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5771
- Accuracy: 0.4688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.9643 | 0.3438 |
| No log | 2.0 | 40 | 1.7819 | 0.4125 |
| No log | 3.0 | 60 | 1.6521 | 0.4562 |
| No log | 4.0 | 80 | 1.6034 | 0.4938 |
| No log | 5.0 | 100 | 1.5769 | 0.5062 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Daniil-plotnikov/Russian-Vision-V5.2
|
Daniil-plotnikov
| 2023-09-17T07:14:29Z | 41 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"ru",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-30T13:40:03Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
language:
- ru
- en
---
### This is currently our best model!
It understands Russian and English.
|
BBBBirdIsTheWord/ppo-LunarLander-v2
|
BBBBirdIsTheWord
| 2023-09-17T07:11:56Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-17T03:10:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 293.55 +/- 23.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual SB3 Hub convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Repo id comes from this card; the .zip filename is assumed.
checkpoint = load_from_hub(
    repo_id="BBBBirdIsTheWord/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
nguyenthanhdo/dolphin_noprob
|
nguyenthanhdo
| 2023-09-17T06:56:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-13T18:59:24Z |
## This model is tailored for question answering with context. How to use:
---
```py
import torch
import os

from transformers import LlamaForCausalLM, LlamaTokenizer, LlamaConfig
from transformers import GenerationConfig, TextStreamer
from peft import PeftModel
from axolotl.prompters import AlpacaPrompter, PromptStyle

### Load model
torch_dtype = torch.bfloat16
device_map = {"": int(os.environ.get("CUDA_DEVICE") or 0)}
# model_id = "nguyenthanhdo/noprob_model"  # you may try this
model_id = "NousResearch/Llama-2-7b-hf"
peft_id = "nguyenthanhdo/dolphin_noprob"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    config=LlamaConfig.from_pretrained(model_id),
    device_map=device_map,
    torch_dtype=torch_dtype,
)
model = PeftModel.from_pretrained(model, peft_id)
model = model.merge_and_unload()

### Build prompt
prompter = AlpacaPrompter(prompt_style=PromptStyle.INSTRUCT.value)
# instruction = "Provide short and concise answer. The answer should be straight and only provides explanation when needed."  # another instruction to test
instruction = "You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer."
question = input()
context = input()
# Renamed from `input` to avoid shadowing the builtin used two lines above.
model_input = f"""Dựa vào bài viết dưới đây, trả lời câu hỏi phía dưới:\n{context}\n\nCâu hỏi: {question}"""
prompt = next(prompter.build_prompt(instruction=instruction, input=model_input, output=""))

### Generate answer
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
model.eval()
max_new_tokens = 512  # was undefined in the original snippet; set an explicit budget
with torch.no_grad():
    generation_config = GenerationConfig(
        repetition_penalty=1.13,
        max_new_tokens=max_new_tokens,
        temperature=0.2,
        top_p=0.95,
        top_k=20,
        pad_token_id=tokenizer.pad_token_id,
        do_sample=True,
        use_cache=True,
        return_dict_in_generate=True,
        output_attentions=False,
        output_hidden_states=False,
        output_scores=False,
    )
    streamer = TextStreamer(tokenizer, skip_prompt=True)
    generated = model.generate(
        inputs=input_ids,
        generation_config=generation_config,
        streamer=streamer,
    )
```
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
nailashfrni/emotion_classification
|
nailashfrni
| 2023-09-17T06:42:39Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T06:35:03Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.51875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4178
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.3316 | 0.4562 |
| No log | 2.0 | 80 | 1.3601 | 0.5 |
| No log | 3.0 | 120 | 1.2794 | 0.5563 |
| No log | 4.0 | 160 | 1.3851 | 0.5 |
| No log | 5.0 | 200 | 1.4786 | 0.4625 |
| No log | 6.0 | 240 | 1.4805 | 0.4875 |
| No log | 7.0 | 280 | 1.4581 | 0.4813 |
| No log | 8.0 | 320 | 1.4258 | 0.525 |
| No log | 9.0 | 360 | 1.5452 | 0.5 |
| No log | 10.0 | 400 | 1.3624 | 0.575 |
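As a quick sanity check (not stated in the card), the step counts in the table combined with the batch size above imply the approximate size of the training split:

```python
# With train_batch_size = 16 and 40 optimizer steps per epoch (from the Step
# column), the training split holds roughly 40 * 16 = 640 images. This assumes
# no gradient accumulation; the true count may differ slightly at the last batch.
train_batch_size = 16
steps_per_epoch = 40
approx_train_images = steps_per_epoch * train_batch_size
print(approx_train_images)  # 640
```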
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nailashfrni/image_classification
|
nailashfrni
| 2023-09-17T06:27:34Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T05:26:41Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9420289855072463
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1728
- Accuracy: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 52 | 0.2885 | 0.9179 |
| No log | 2.0 | 104 | 0.1829 | 0.9469 |
| No log | 3.0 | 156 | 0.1789 | 0.9565 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
okaris/autotrain-hate-speech-3k-89642143970
|
okaris
| 2023-09-17T06:23:59Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"text-regression",
"en",
"dataset:okaris/autotrain-data-hate-speech-3k",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-17T06:21:33Z |
---
tags:
- autotrain
- text-regression
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- okaris/autotrain-data-hate-speech-3k
co2_eq_emissions:
emissions: 0.023898445665108296
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 89642143970
- CO2 Emissions (in grams): 0.0239
## Validation Metrics
- Loss: 1.768
- MSE: 1.768
- MAE: 1.007
- R2: 0.604
- RMSE: 1.330
- Explained Variance: 0.614
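As a quick consistency check on the metrics above (plain arithmetic, not part of the card's evaluation), RMSE should equal the square root of MSE:

```python
import math

# RMSE is the square root of MSE; the reported values (MSE 1.768, RMSE 1.330)
# are consistent with each other.
mse = 1.768
rmse = math.sqrt(mse)
print(round(rmse, 3))  # 1.33
```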
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/okaris/autotrain-hate-speech-3k-89642143970
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("okaris/autotrain-hate-speech-3k-89642143970", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("okaris/autotrain-hate-speech-3k-89642143970", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
ikeus/xlm-roberta-base-finetuned-panx-de-fr
|
ikeus
| 2023-09-17T06:17:17Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-17T05:50:48Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2889 | 1.0 | 715 | 0.1777 | 0.8220 |
| 0.1479 | 2.0 | 1430 | 0.1630 | 0.8451 |
| 0.0948 | 3.0 | 2145 | 0.1658 | 0.8593 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
MaX0214/classification
|
MaX0214
| 2023-09-17T06:11:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-17T03:54:37Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Jiuzhouh/flan-t5-xxl-lora-copasse-new
|
Jiuzhouh
| 2023-09-17T05:58:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-17T05:57:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
BBBBirdIsTheWord/ppo-Huggy
|
BBBBirdIsTheWord
| 2023-09-17T05:56:59Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-17T05:56:53Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: BBBBirdIsTheWord/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AeraX-Valley/crash_detection_resnet-50
|
AeraX-Valley
| 2023-09-17T05:35:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-17T05:22:41Z |
# accilanews-detection
## Our first detection model
### Architecture: ResNet-50
|
m-aliabbas1/fine_tune_bert_output
|
m-aliabbas1
| 2023-09-17T05:26:49Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-17T05:22:48Z |
---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
model-index:
- name: fine_tune_bert_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tune_bert_output
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0094
- Overall Precision: 0.9722
- Overall Recall: 0.9722
- Overall F1: 0.9722
- Overall Accuracy: 0.9963
- Number Of Employees F1: 0.9722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Number Of Employees F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:----------------------:|
| 0.0011 | 50.0 | 1000 | 0.0046 | 0.9722 | 0.9722 | 0.9722 | 0.9963 | 0.9722 |
| 0.0003 | 100.0 | 2000 | 0.0004 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 150.0 | 3000 | 0.0094 | 0.9722 | 0.9722 | 0.9722 | 0.9963 | 0.9722 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
### Label IDs
- {0: 'O', 1: 'B-number_of_employees', 2: 'I-number_of_employees'}
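The id-to-label mapping above can be used to decode the model's class-id predictions back into BIO tags. A minimal sketch; `decode_predictions` is a hypothetical helper, not part of the model:

```python
# Map predicted class ids back to the BIO labels listed above.
id2label = {0: "O", 1: "B-number_of_employees", 2: "I-number_of_employees"}

def decode_predictions(pred_ids):
    """Convert a sequence of predicted class ids into BIO tag strings."""
    return [id2label[i] for i in pred_ids]

print(decode_predictions([0, 1, 2, 0]))
# ['O', 'B-number_of_employees', 'I-number_of_employees', 'O']
```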
|
CyberHarem/akagi_miria_idolmastercinderellagirls
|
CyberHarem
| 2023-09-17T05:20:46Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/akagi_miria_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-17T05:05:57Z |
---
license: mit
datasets:
- CyberHarem/akagi_miria_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of akagi_miria_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6480, you need to download `6480/akagi_miria_idolmastercinderellagirls.pt` as the embedding and `6480/akagi_miria_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6480**, with a score of 0.978. The trigger words are:
1. `akagi_miria_idolmastercinderellagirls`
2. `short_hair, brown_eyes, two_side_up, blush, smile, black_hair, open_mouth, brown_hair, :d, bangs, hair_ornament`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.977 | [Download](8100/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bikini.png) | [<NSFW, click to see>](8100/previews/bondage.png) | [<NSFW, click to see>](8100/previews/free.png) |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.974 | [Download](7560/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bikini.png) | [<NSFW, click to see>](7560/previews/bondage.png) | [<NSFW, click to see>](7560/previews/free.png) |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.978 | [Download](7020/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bikini.png) | [<NSFW, click to see>](7020/previews/bondage.png) | [<NSFW, click to see>](7020/previews/free.png) |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| **6480** | **0.978** | [**Download**](6480/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bikini.png) | [<NSFW, click to see>](6480/previews/bondage.png) | [<NSFW, click to see>](6480/previews/free.png) |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.970 | [Download](5940/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bikini.png) | [<NSFW, click to see>](5940/previews/bondage.png) | [<NSFW, click to see>](5940/previews/free.png) |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.976 | [Download](5400/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) | [<NSFW, click to see>](5400/previews/free.png) |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.958 | [Download](4860/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bikini.png) | [<NSFW, click to see>](4860/previews/bondage.png) | [<NSFW, click to see>](4860/previews/free.png) |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.968 | [Download](4320/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) | [<NSFW, click to see>](4320/previews/free.png) |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.970 | [Download](3780/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bikini.png) | [<NSFW, click to see>](3780/previews/bondage.png) | [<NSFW, click to see>](3780/previews/free.png) |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.969 | [Download](3240/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) | [<NSFW, click to see>](3240/previews/free.png) |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.969 | [Download](2700/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bikini.png) | [<NSFW, click to see>](2700/previews/bondage.png) | [<NSFW, click to see>](2700/previews/free.png) |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.956 | [Download](2160/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) | [<NSFW, click to see>](2160/previews/free.png) |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.926 | [Download](1620/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bikini.png) | [<NSFW, click to see>](1620/previews/bondage.png) | [<NSFW, click to see>](1620/previews/free.png) |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.941 | [Download](1080/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) | [<NSFW, click to see>](1080/previews/free.png) |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.940 | [Download](540/akagi_miria_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bikini.png) | [<NSFW, click to see>](540/previews/bondage.png) | [<NSFW, click to see>](540/previews/free.png) |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
Darojat/Darojat
|
Darojat
| 2023-09-17T05:12:46Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-09-17T05:12:16Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
swaroopajit/git-base-next-temp
|
swaroopajit
| 2023-09-17T04:57:27Z | 63 | 0 |
transformers
|
[
"transformers",
"pytorch",
"git",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-09-17T04:53:53Z |
---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-next-temp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-next-temp
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Alex14005/model-Dementia-classification-Alejandro-Arroyo
|
Alex14005
| 2023-09-17T04:47:44Z | 197 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T02:28:13Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
widget:
- src: https://huggingface.co/Alex14005/model-Dementia-classification-Alejandro-Arroyo/raw/main/Mild-demented.jpg
example_title: Mild Demented
- src: https://huggingface.co/Alex14005/model-Dementia-classification-Alejandro-Arroyo/raw/main/No-demented.jpg
example_title: Healthy
model-index:
- name: model-Dementia-classification-Alejandro-Arroyo
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: RiniPL/Dementia_Dataset
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9230769230769231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-Dementia-classification-Alejandro-Arroyo
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the RiniPL/Dementia_Dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1858
- Accuracy: 0.9231
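As an aside (an observation, not something the card states), the reported accuracy is exactly a simple fraction, which hints at the size of the validation split:

```python
from fractions import Fraction

# 0.9230769230769231 is exactly 12/13, suggesting the validation split size is
# a multiple of 13 (e.g. 12 of 13 samples classified correctly).
acc = 0.9230769230769231
print(Fraction(acc).limit_denominator(100))  # 12/13
```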
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
SiberiaSoft/SiberianPersonaFredLarge-2
|
SiberiaSoft
| 2023-09-17T04:45:23Z | 142 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"dataset:SiberiaSoft/SiberianPersonaChat-2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-17T04:41:25Z |
---
license: mit
datasets:
- SiberiaSoft/SiberianPersonaChat-2
language:
- ru
pipeline_tag: text2text-generation
widget:
- text: '<SC6>Я парень, консультант по разным вопросам. Я очень умный. Я люблю помогать собеседнику. Недавно, у меня был следующий диалог:\nТы: Почему трава зеленая?\nЯ: <extra_id_0>'
- text: '<SC6>Я очень умная девушка, и хочу помочь своему другу полезными советами. Недавно, у меня был следующий диалог:\nТы: Ты знаешь, я недавно посетил природный парк, и это было просто невероятно!\nЯ: Настоящая красота природных парков и заповедников никогда не перестанет меня поражать.\nТы: Согласен, я был ошеломлен разнообразием животных и растительности.\nЯ: <extra_id_0>'
- text: '<SC6>Вопрос: Как вывести воду из организма для похудения быстро?\nОтвет: <extra_id_0>'
---
### SiberiaSoft/SiberianPersonaFred
This model is designed to imitate a persona in dialogue. More details [here](https://huggingface.co/datasets/SiberiaSoft/SiberianPersonaChat-2).
The model is based on [FRED-T5-LARGE](https://huggingface.co/ai-forever/FRED-T5-large).
## Persona description format
The persona descriptions below are literal model inputs and are therefore kept in Russian:
1. Я очень умная девушка, и хочу помочь своему другу полезными советами.
2. Я парень, консультант по разным вопросам. Я очень умный. Люблю помогать собеседнику.
Facts about the persona (full name, age, etc.) can also be inserted into the prompt:
1. Я девушка 18 лет. Я учусь в институте. Живу с родителями. У меня есть кот. Я ищу парня для семьи.
Article on Habr: [link](https://habr.com/ru/articles/751580/)
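Putting the format together: the model input is the `<SC6>` marker, the persona description, the dialogue history, and a trailing `<extra_id_0>` placeholder where the model writes its reply. A minimal sketch of the string construction (the `build_prompt` helper name is illustrative, not part of the library):

```python
# Build the model input: <SC6> marker + persona + dialogue history
# + the <extra_id_0> placeholder where the model writes its reply.
def build_prompt(persona: str, dialog: list) -> str:
    history = '\n'.join(dialog)
    return f'<SC6>{persona} Недавно, у меня был следующий диалог:\n{history}\nЯ: <extra_id_0>'

persona = 'Я очень умная девушка, и хочу помочь своему другу полезными советами.'
dialog = ['Ты: Почему трава зеленая?']
print(build_prompt(persona, dialog))
```

The resulting string matches the widget examples in the frontmatter above.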
### Inference code example
```python
import torch
import transformers

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

t5_tokenizer = transformers.GPT2Tokenizer.from_pretrained("SiberiaSoft/SiberianPersonaFredLarge-2")
t5_model = transformers.T5ForConditionalGeneration.from_pretrained("SiberiaSoft/SiberianPersonaFredLarge-2")
t5_model.to(device)  # move the model to the same device as the inputs
t5_model.eval()

while True:
    print('-' * 80)
    dialog = []
    while True:
        msg = input('H:> ').strip()
        if len(msg) == 0:
            break
        msg = msg[0].upper() + msg[1:]
        dialog.append('Ты: ' + msg)
        # The persona prompt goes at the beginning.
        prompt = '<SC6>Я парень, консультант по разным вопросам. Я очень умный. Я люблю помогать собеседнику. Недавно, у меня был следующий диалог:' + '\n'.join(dialog) + '\nЯ: <extra_id_0>'
        input_ids = t5_tokenizer(prompt, return_tensors='pt').input_ids
        out_ids = t5_model.generate(input_ids=input_ids.to(device), do_sample=True, temperature=0.9,
                                    max_new_tokens=512, top_p=0.85, top_k=2, repetition_penalty=1.2)
        t5_output = t5_tokenizer.decode(out_ids[0][1:])
        if '</s>' in t5_output:
            t5_output = t5_output[:t5_output.find('</s>')].strip()
        t5_output = t5_output.replace('<extra_id_0>', '').strip()
        t5_output = t5_output.split('Собеседник')[0].strip()
        print('B:> {}'.format(t5_output))
        dialog.append('Я: ' + t5_output)
```
|