---
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
inference: false
tags:
- finetuned
- prompt generator
- stable-diffusion
- mistralai/Mistral-7B-v0.1
- 'inference: false'
---
<br>
<h1 style="font-size: 2em;">Prompt Diffusion</h1>
|<img src="coffee.png" width="550"/>|<img src="skii.png" width="550"/>|<img src="dance_rain.png" width="550"/>|<img src="man_beach.png" width="550"/>|
|---------------------------------------|----------------------------------|-----------------------------------------|--------------------------------------|
This model was built via parameter-efficient QLoRA fine-tuning of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on a custom dataset.
The goal of this project is to build a model that takes short, simple natural-language inputs and generates a highly detailed, descriptive prompt for [Stable Diffusion](https://stability.ai/stable-diffusion) and other similar image generation models,
making it easier for everyone to get high-quality images.
## Note
>This is the first iteration of this project, built to explore the capabilities of the Mistral-7B model.
>As of now, the model is fine-tuned mostly on prompts that involve humans, with descriptions of the subject and the scene.
>This is a work in progress and just a prototype, so some issues with the model's responses are to be expected.
## Input Template
```
###Human:
generate a stable diffusion prompt for {input}
###Assistant:
```
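For reference, the template can be filled programmatically. A minimal sketch, assuming nothing beyond standard string formatting (the `build_input` helper name is illustrative, not part of the model's API):

```python
# Illustrative helper that fills the input template above.
# `build_input` is a hypothetical name, not part of the model's API.
def build_input(user_request: str) -> str:
    return (
        "###Human:\n"
        f"generate a stable diffusion prompt for {user_request}\n"
        "###Assistant:\n"
    )

print(build_input("a man skiing"))
```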
## Output Template
```
###Human:
generate a stable diffusion prompt for a model in a cafe sipping coffee
###Assistant:
A model in a cafe sipping coffee, with a warm glow illuminating her face. She wears a stylish dress and has a book in her hand. The background is a cozy cafe with dim lights.
```
## Example prompts and responses
Example 1:
**User**:
>a model in a cafe sipping coffee
**Prompt Diffusion**:
>A model in a cafe sipping coffee, with a warm glow illuminating her face.
She wears a stylish dress and has a book in her hand. The background is a cozy cafe with dim lights.
<br>
Example 2:
**User**:
>a man skiing
**Prompt Diffusion**:
>A man in a ski suit, with a helmet and goggles, is skiing down a snowy mountain, with the wind blowing in his face and the sun shining brightly.
The man is enjoying the thrill of skiing, as he glides down the slopes, leaving behind a trail of fresh powder.
<br>
Example 3:
**User**:
>a beautiful woman dancing in rain
**Prompt Diffusion**:
>A beautiful woman, dressed in a flowing white gown, dances gracefully in the rain.
The raindrops fall gently around her, creating a magical and ethereal atmosphere.
She moves with grace and elegance, her long hair flowing in the wind.
The rain creates a shimmering effect on her gown, making her look like a goddess.
<br>
Example 4:
**User**:
>a man on a beach
**Prompt Diffusion**:
>A man sits on a beach, the waves crashing against the shore.
The sun is setting, casting a warm glow over the sand and water.
The man is lost in thought, his mind wandering as he takes in the beauty of the scene.
<br>
## Demo
>This code can be run even on the free version of Google Colab. Change the runtime to **GPU - T4** and run the notebook below:
## <a href="https://colab.research.google.com/drive/1NfNA6sYWUG0Nqy42lhyJfUIqDxsrV64d?usp=sharing" target="_blank" style="color: orange; font-weight: bold;">Try on Colab</a>
## Basic usage
```python
!pip install git+https://github.com/huggingface/transformers
!pip install git+https://github.com/huggingface/peft.git
!pip install torch
!pip install -q bitsandbytes accelerate
```
```python
# Import libraries
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
import re
```
```python
# Load the adapter model and merge it with the base model for inference
torch.set_default_device('cuda')
peft_model_id = "abhishek7/Prompt_diffusion-v0.1"
config = PeftConfig.from_pretrained(peft_model_id)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    low_cpu_mem_usage=True,
    quantization_config=bnb_config,  # load_in_4bit is already set here
    torch_dtype=torch.bfloat16,      # match bnb_4bit_compute_dtype
    device_map="auto"
)
model = PeftModel.from_pretrained(model, peft_model_id)
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, trust_remote_code=True)
tokenizer.padding_side = "right"
```
```python
# Function to truncate text based on punctuation count
def truncate_text(text, max_punctuation):
    punctuation_count = 0
    truncated_text = ""
    for char in text:
        truncated_text += char
        if char in [',', '.']:
            punctuation_count += 1
            if punctuation_count >= max_punctuation:
                break
    truncated_text = truncated_text.rstrip()
    # Replace a trailing comma with a full stop
    if truncated_text and truncated_text[-1] == ',':
        truncated_text = truncated_text[:-1] + '.'
    return truncated_text
# Function to generate a prompt
def generate_prompt(user_input, max_length, temperature):
    input_context = f'''
###Human:
generate a stable diffusion prompt for {user_input}
###Assistant:
'''
    inputs = tokenizer.encode(input_context, return_tensors="pt").to(model.device)
    outputs = model.generate(
        inputs,
        max_length=max_length,
        do_sample=True,  # temperature only has an effect when sampling
        temperature=temperature,
        num_return_sequences=1,
    )
    output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Extract the Assistant's response using regex
    match = re.search(r'###Assistant:(.*?)(###Human:|$)', output_text, re.DOTALL)
    if match:
        assistant_response = match.group(1).strip()
    else:
        raise ValueError("No Assistant response found")
    # Truncate the Assistant's response after at most 10 punctuation marks
    truncated_response = truncate_text(assistant_response, max_punctuation=10)
    return truncated_response
```
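The regex extraction step can be checked in isolation against a hand-written string. A minimal sketch; the `sample_output` below is made up to mimic what `tokenizer.decode` might return, not an actual model output:

```python
import re

# Made-up decoded output, for illustration only
sample_output = (
    "###Human:\n"
    "generate a stable diffusion prompt for a man skiing\n"
    "###Assistant:\n"
    "A man in a ski suit glides down a snowy mountain, the sun shining brightly.\n"
    "###Human:\n"
    "another request"
)

# Same pattern as in generate_prompt: capture everything between
# '###Assistant:' and the next '###Human:' (or end of string)
match = re.search(r'###Assistant:(.*?)(###Human:|$)', sample_output, re.DOTALL)
assistant_response = match.group(1).strip()
print(assistant_response)
```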
```python
# Usage:
input_text = "a beautiful woman dancing in rain"
prompt = generate_prompt(input_text, max_length=150, temperature=0.3)
print("\nPrompt: " + prompt)
```
## Contributing
Contributions are welcome! If you find any bugs, create an issue or submit a pull request with your proposed changes.
## Acknowledgements
This model was fine-tuned by [Abhishek Kalra](https://github.com/abhishek7kalra) on Sep 29, 2023 and is intended for research applications only.
[](mailto:abhishek7.kalra@gmail.com)
## mistralai/Mistral-7B-v0.1 citation
```
coming
```
## Framework versions
- PEFT 0.6.0.dev0