Prompt Examples NSFW / Skin / Eyes / Hands

#40
by DavidStorm - opened

Hi, I love this model but I have problems outputting nice-looking NSFW images, especially the skin, eyes, and hands.
I think I am doing something wrong.
Can somebody share good prompt / sampler examples?

Currently I am using

  • res_multistep Sampler
  • scheduler beta
  • 30 steps

For negative prompt:
blurry, bad hands, cartoon, anime, drawing, painting, 3d,okeh,text, logo, watermark

For now I am refining them with another checkpoint to fix the skin and other parts. I think my setup can be optimized.

Thanks in advance!

Hello DavidStorm,

What does "okeh" mean in your negative prompt? Did you mean "bokeh"?

I use this negative: bad hands, worst quality, low quality, bad quality, unfinished, out of focus, out of frame, deformed, disfigured, blurry, smudged, restricted palette, cartoon, anime, drawing, painting, 3d, flat colors, text, logo, watermark, closed eyes, cap, hat, blood, [and sometimes eyeglasses when I don't want them]

There are also all the extra legs/arms/fingers or missing legs/arms/fingers tags that could help, and maybe "bad body" in the negative, plus "bad mouth" and "bad teeth" in some cases.

I still get drawings sometimes, so maybe I will add weights like (drawing:1.2).

I'm testing this model on the Chroma v32 version and I noticed something weird. I made a prompt of about 2 sentences and it worked well: I got different faces with random seeds and CFG 7, but at some point the generations started to show the same face.

So I made a shorter prompt and got back to varied faces, but again the same faces returned; especially the women's faces are the same, reappearing again and again.

I made the simplest possible prompt, like "a woman and a man", and still got the same faces; after some generations I got different faces, then went back to the same face again. I get more variation in men's faces though.

I tried to generate without a negative and still had the same issues.

I managed to get more face variety by adding an age in my positive prompt.

I don't understand what's going on. Maybe it's the action described in the prompt that makes the same faces appear when it's not understood by the model, while other actions work nicely and generate random faces.

Thanks for the hints. I never had the same faces, to be honest.
"okeh" was a copy + paste issue. It should be "bokeh".

I am not sure this will solve my issue, but I will try, thanks!

Thank you @iafun for sharing.

My quick 2 cents:

  • I use the following neg prompt: illustration, anime, drawing, artwork, bad hands. artwork seems to be quite efficient
    • I prefer not to have a long neg, as my positive is usually (always?) very long already and I get worse results w/ a long neg.
  • I may get the same face when it generates drawings / illustrations, otherwise it's quite varied.
  • to get photo (which I'm biased towards), a few tricks I use:
    • I never use tags, as I find it far more difficult to get rid of illustrations. But it's possible (see camera lens and lighting below).
    • professional photo (or photograph ofc), or amateur if you want, but not just photo
    • adding the camera lens (ex. Bolex H16, Leica M10, Lumix GH5, ...) may work sometimes, but not always. But I don't have a precise list for Flux unfortunately, only some for SD...
    • cinematic may bring more illustrations, but dynamic works well (and pretty good in POV NSFW :)). As in This photo is dynamic near the end.
    • adding some lighting seems to enforce photo-rendering
    • finally, adding some atmosphere also seems to work fine, e.g. The atmosphere should be inviting and cozy, with dimmed warm lighting
      • I recently came across this page; it has helped a bit, in particular w/ prompt organization and atmosphere

@DavidStorm : I now stay with euler / beta mostly, as photo-rendering has clearly improved over time. You can try euler_a as sampler, and/or ddim_u as scheduler, usually more photo but they can get a bit wild (and a too amateur / raw look) sometimes. And v33 seems to go in the right direction, as per my very early tests...

Hope this helps!

Oh, and I forgot @iafun : CFG 7? I use CFG 3.0-4.0, sometimes up to 5.0 if my prompt and seed get photo enough.

Thank you bp0, I will test your settings to see how it goes and might post feedback here.

Would love to get your feedback! I'm no expert, but I believe in sharing our experiences and mutual improvement.
Maybe once all this gets properly tested / validated, I'll take the time to write an article on civitai or here; I believe this great model deserves a large audience.
In particular maybe for POV, which works pretty impressively but it took me time...

So after many tests, I think both the high CFG and the long negative were the cause of the same faces and bad outputs.

I lowered it to 4 but found my sweet spot at 4.5.

I use your negative above and artwork works fine; I don't get drawings anymore. I had some manga output, but I never had one since I started using artwork. I used manga as a negative before.

One thing that could be tested: I sometimes get people who are quite amorphous and apathetic, disregarding the action in the positive prompt, and I wonder if the word "distracted" in the negative could help, as I saw this word in the negatives of a lot of people who use Illustrious, along with the dynamic word in the positive prompt. This has to be tried, as we could get more vivid people and scenery.

I also tried some prompts describing people's shoes in order to get a full body, and I was pretty amazed at the results for old people: the skin and body are more realistic than what I got with the base Flux model, but I often get horror bodies, probably due to a bad resolution choice. I think it's better to stay with a short negative, but I wonder if negatives like missing legs/arms/fingers, mutated fingers/arms/legs, or bad legs/arms/fingers could help.

For generations like portrait/upper body, I put only "hands" in the negative instead of "bad hands", so I get no hands at all and don't have issues with hands.

So after many tests, I think both the high CFG and the long negative were the cause of the same faces and bad outputs.
I use your negative above and artwork works fine; I don't get drawings anymore.

Really glad to read this!

I sometimes get people who are quite amorphous and apathetic, disregarding the action in the positive prompt

Quite surprising I must say, as I find Chroma expresses emotions quite strikingly. For example, if I describe that the character is laughing and has a joyful look, I get pretty good results.
Maybe try adding This photo is dynamic near the end, that could help.
Generally speaking, I never use negatives like distracted. I only use the positive prompt to render emotions and such.

but I often get horror bodies, probably due to a bad resolution choice

Yes, most certainly the resolution (or more precisely, the aspect ratio) is key here. I found it critically important when doing POV, but not only there. I do indeed get extra heads / body parts when the aspect ratio is not adapted to what I describe, or sometimes when the resolution is too high (but less frequently).
Instead of describing the shoes, maybe try starting with Full-body professional photo or so?
Also, describing the background may help the model "fill in the blanks" if your character only occupies part of the image.
And once again, I never had to use a negative to avoid deformities. I will probably get rid of the bad hands I used to add, as it honestly didn't have much impact (and v33 brought real improvement in this regard).

If you want, just drop an example prompt and seed (+sampler / scheduler / resolution) here, and I'll try on my side.

Hope this helps!

Thank you for your feedback bp0!

What do you think of the power of opposite tags, like putting "look at the camera" in the positive and "closed eyes" in the negative, so you'll be sure to get people with open eyes? Or, for the example you gave ("the character is laughing and has a joyful look"), could putting "sad" in the negative reinforce the contrast between the positive and negative prompts and give better results? Or do you think the positive prompt is enough to get what you want, without weighting the negative and risking jeopardizing the output? You seem to me like a positive-prompt believer, and a minimalist prompter, at least with this model.

Thank you for your feedback bp0!

You're very much welcome my friend!

What do you think of the power of opposite tags, like putting "look at the camera" in the positive and "closed eyes" in the negative, so you'll be sure to get people with open eyes? Or, for the example you gave ("the character is laughing and has a joyful look"), could putting "sad" in the negative reinforce the contrast between the positive and negative prompts and give better results? Or do you think the positive prompt is enough to get what you want, without weighting the negative and risking jeopardizing the output?

I would definitely recommend only the positive prompt, as you can imagine :)
Something like He smiles confidently and looks at viewer, his big eyes are opened should work. But it's not so simple; I did a test for you (and it also addresses the full-body shot issue):

Let's take the following:
euler / beta / 40 (or less) - seed 580438709200704 - res 896x1152
positive (that's a short one, only 67 words! \o/ ):

Professional photo of a 35 year old man sitting on a bench.
He looks like a geek, with a colorful t-shirt depicting a cute penguin dressed like a ladybug.
He smiles confidently and looks at viewer, his big eyes are opened and inspire trust.
The atmosphere should be joyful, with warm natural lighting.
The scene takes place in a park in Japan, during a nice spring day.

neg: illustration, anime, drawing, artwork, bad hands

=> that gives a pretty good result. If I replace He smiles confidently with He laughs heartily, his eyes won't be open, but that's quite normal: he is laughing.

Now let's try to get him in full body. Just replace with Full-body professional photo in line 1.
That won't work...

To solve it:
Solution 1: raise to 896x1488. This will work with this seed, but with others we may have to add This photo captures the man from head to feet. right after the atmosphere line. And we lose a bit of warmth in the atmosphere, I think.

Solution 2: keep 896x1152, but replace line 2 with He looks like a geek, he wears sneakers and a colorful t-shirt depicting a cute penguin dressed like a ladybug.. Works, but he has changed his pose.

Solution 3: keep 896x1152, but you'll have to remove stuff:

  • remove reference to his eyes, just keep He smiles confidently
  • remove the atmosphere line, and replace it with This photo captures the man from head to feet.

My understanding (but to be confirmed): at 896x1152, there is basically not "enough pixels" to properly depict his eyes and the atmosphere. Maybe there's a way to force the full-body even more, but I don't know how tbh.
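
A quick way to sanity-check the resolutions discussed in this thread, as a sketch only: it assumes that Chroma, like other Flux-family models, works best around ~1 MP with dimensions divisible by 16 (an assumption, not a documented spec), and `describe_res` is a hypothetical helper, not part of any tool:

```python
def describe_res(width, height):
    """Return (megapixels, aspect ratio, 16-aligned?) for a resolution."""
    mp = width * height / 1_000_000
    ratio = width / height
    aligned = width % 16 == 0 and height % 16 == 0
    return round(mp, 2), round(ratio, 3), aligned

# Resolutions mentioned in the thread
for w, h in [(896, 1152), (896, 1488), (1024, 1024)]:
    mp, ratio, aligned = describe_res(w, h)
    print(f"{w}x{h}: {mp} MP, ratio {ratio}, 16-aligned: {aligned}")
```

This makes the "not enough pixels" intuition concrete: 896x1488 is ~30% more pixels and a noticeably taller aspect ratio than 896x1152, which is why the eyes and atmosphere survive at the higher resolution.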

It's just an example, but I hope this will help you get the results you expect in your images.

Examples (safe version).
This is just one of the prompts I used.

Positive Prompt
medium-wide-shot, professional color photo, A chubby mature man sitting in speedos. He is looking at the viewer. He has a short beard, extreme hairy chest, extreme hairy arms, Detailed eyes, soft cinematic lighting, warm tones, soft bokeh, This photo was captured with Bolex H16, This photo is dynamic

Negative
monochrome, illustration, anime, drawing, artwork, bad hands, worst quality, low quality, bad quality, unfinished, out of focus, out of frame, deformed, disfigured, blurry, smudged, restricted palette, cartoon, painting, text, logo, watermark

Chroma 33
Steps: 30
Sampler: ddim (also tested others)
Scheduler: ddim_uniform (also tested others)
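
Long comma-separated tag lists like these are easy to mangle when copy-pasting (the "okeh" earlier in the thread came from exactly that, and stray duplicates or uneven spacing creep in too). A tiny hypothetical helper to normalize such a list, as a sketch only; `clean_tags` is not part of any tool:

```python
def clean_tags(prompt):
    """Split a comma-separated tag list, trim whitespace around each tag,
    and drop case-insensitive duplicates while preserving order."""
    seen, out = set(), []
    for tag in prompt.split(","):
        tag = tag.strip()
        if tag and tag.lower() not in seen:
            seen.add(tag.lower())
            out.append(tag)
    return ", ".join(out)

print(clean_tags("bad hands,bad hands, worst quality,  blurry ,text"))
# -> bad hands, worst quality, blurry, text
```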

20250602_110407_Chroma__00001_.png
20250602_110534_Chroma__00001_.png

@DavidStorm ,

Not bad at all! While I sometimes use ddim_uniform, I will reconsider using ddim as a sampler.

I did some quick tests on my side, all with euler / beta / 40 steps / 1024x1024, and illustration, anime, drawing, artwork as neg prompt.
Note: my workflow is derived from the standard one, except for the seed node from rgthree and the Input: Multiline node, which comes from the extension I developed. You can remove the latter: just copy the prompt and paste it into the positive CLIP Text Encode as usual, and remove the lines starting with #, as my node automatically strips them (useful for testing prompts). If you want to keep it, it's called ComfyLab-Pack and is available in the repository (and I think my XY Plot is not bad at all... :))

Prompt 1:

Professional photo of a chubby mature man in speedos sitting on a stool.
He holds a glass of beer, looks at viewer and friendly cheers.
He has a short beard, extremely hairy chest and arms, fair skin, tanned arms and face.
He is wet and water trickles on his body.
The atmosphere should be inviting and friendly.
The warm summer light reflects over his body and the glass, and casts soft shadows.
The scene takes place in a garden, with a swimming pool in background.

prompt 1.png

What to say:

  • I added a bit of action and details
  • I always use a short neg prompt, and I'm actually getting rid of bad hands nowadays.
  • I do not use the camera angle, because, as I explained in my comment above, I find it quite hard to control (depends on res, ...).
  • I used 1024x1024 because we often lose the lighting with too-high resolutions. But that is to be checked case by case.
  • I tend to not use camera lenses so much anymore, as I find that describing the atmosphere brings a lot. And it's great in NSFW (use erotic, vivid or so), adding plenty of nice details...
  • In your case and in this prompt, I don't think dynamic is useful since there is no motion. And it may make the image a bit less photo sometimes. But here too, it's quite interesting with NSFW if there is some action.
  • I discovered that lighting is very important for photo rendering: that's why here I decided to have the man wet, to emphasize this.
  • TL;DR: mood / atmosphere and lighting will change your image quite dramatically, and improve photo-realism. Basically you explain what you expect and let the model surprise you (usually in a good way): it can save a lot of prompt.
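
The structure used in the prompts above (subject first, then details, then atmosphere, lighting, and scene) can be sketched as a small template builder. This is purely illustrative: `build_prompt` and its fixed "Professional photo of" lead are assumptions for the sketch, not part of any tool:

```python
def build_prompt(subject, details=(), atmosphere=None, lighting=None, scene=None):
    """Assemble a Chroma-style natural-language prompt, one idea per line,
    in the order: subject -> details -> atmosphere -> lighting -> scene."""
    lines = [f"Professional photo of {subject}."]
    lines += [f"{d}." for d in details]
    if atmosphere:
        lines.append(f"The atmosphere should be {atmosphere}.")
    if lighting:
        lines.append(f"{lighting}.")
    if scene:
        lines.append(f"The scene takes place {scene}.")
    return "\n".join(lines)

print(build_prompt(
    "a chubby mature man in speedos sitting on a stool",
    details=("He holds a glass of beer and looks at viewer",),
    atmosphere="inviting and friendly",
    lighting="The warm summer light casts soft shadows",
    scene="in a garden, with a swimming pool in background",
))
```

Keeping one idea per line also makes it easy to comment lines out with # for A/B testing, as described in the workflow note above.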

Prompt 2 - a bit of dynamism:
Let's wait a bit, and check how our friend is doing a few moments later...

a few moments later.jpg

Professional photo of a chubby mature man in speedos.
He is drunk and happy, holding 2 glasses of beer, dancing so frantically that beer is thrown out and splashes on his face.
He has a short beard, extremely hairy chest and arms, fair skin, tanned arms and face.
He is wet and water trickles on his body.
The atmosphere should be joyful and vivid.
#This photo is dynamic and captured with Lumix GH5.
#This photo is dynamic.
The warm summer light reflects over his body, and casts soft shadows.
The scene takes place in a garden, with a swimming pool in background.

prompt 2 - vivid.png

  • Here I wanted to express dynamism. TL;DR: we do not need This photo is dynamic for this photo; the effect is pretty good already.
  • Adding This photo is dynamic, in this specific case, doesn't bring much.
  • Adding This photo is dynamic and captured with Lumix GH5 changes things a bit, as the Lumix GH5 is said to be very dynamic. But I had mixed results with it, sometimes losing a bit of photo rendering, but sometimes getting great colors.
    • Other camera lenses you can try are listed in this page. But it was for SD, and I don't actually know if they really work with Flux and Chroma...

Hope this helps, cheers!

Thanks for your input!
I actually don't like the output when I look at the body hair, for example. There is some kind of pattern which I often see in Flux content.
Currently I am generating base images with Chroma and then refining them in SDXL to output cleaner images / better skin.
But I want to skip that step.


I agree about the body hair; actually, with some seeds I tested it was not great at all. In my first pic it's not too bad on the chest, but not on the arms.
You can maybe try to lower the CFG a bit, increase resolution to 1152x1152, or even switch to euler_a which can be interesting in some cases. Warning: euler_a + ddim_u can be pretty wild, but okay with beta or so.

And there are additional reasons to hope:

  • the photo-rendering is getting better with each version, with finer details (head hair, ...)
  • as @lodestones explained in another thread, training is currently at low res, and higher res is reserved for the end (v48-50). But maybe it has already started, as a large task appeared a few days ago in the Live AIM Training Logs (but I could be wrong on this, just guessing).

What could be interesting is to post DavidStorm's generation with the same positive prompt but a shorter negative prompt, like bp0's (illustration, anime, drawing, artwork), and compare with his first pictures that used the longer negative prompt. Thus we could see the effect of a shorter negative prompt on a picture with this model, or as a general rule.

To me the difference between the 2 images is quite big, but the other settings could also alter the output.

Anyway, I joined the lodestone Discord community linked on the model page, and there are a lot of interesting subchannels with tons of info on prompting. There is also an interesting channel about creating LoRAs. Maybe if you join this Discord we could reach each other and stay in contact.

Excellent suggestions @iafun !

What could be interesting is to post DavidStorm's generation with the same positive prompt but a shorter negative prompt, like bp0's (illustration, anime, drawing, artwork), and compare with his first pictures that used the longer negative prompt. Thus we could see the effect of a shorter negative prompt on a picture with this model, or as a general rule.

In fact, I used different seeds, as I was using a different res / sampler / scheduler anyway.
But doing a side-by-side comparison, changing just the neg prompt (or conversely, just the positive), would be very interesting indeed.

Anyway, I joined the lodestone Discord community linked on the model page, and there are a lot of interesting subchannels with tons of info on prompting. There is also an interesting channel about creating LoRAs. Maybe if you join this Discord we could reach each other and stay in contact.

I haven't joined yet, and will definitely ping you when I do (very soon I believe).

Hello bp0,

You will see on the Discord that they talk a lot about captioning tools and AI chat to enhance their prompts.

There are many schools of thought, it seems. I use JoyCaption 2 and switch between Gemini/Qwen/Yasa chatbots to help me rework some prompt sentences in order to catch an effect.

I am not happy with JoyCaption 2, but the tools they use are often models hosted on Hugging Face without a Space available.

I like Qwen3 more than Gemini for rewriting prompt sentences; it gives such great Flux prompts, very inspiring. You then adapt the prompt to your taste, but Gemini/Qwen are censored while Yasa is not.

It would be interesting to hear which tool you use for captioning / writing prompts, and why.

Hi again @iafun ,

I am not happy with JoyCaption 2, but the tools they use are often models hosted on Hugging Face without a Space available.

I agree, I was not really satisfied with JoyCaption either, but JoyCaption 2 beta 1 is said to improve that a lot: more in this article on Civitai.
taggui integrates JoyCaption, and it runs locally (and it ships tons of other models). I just checked and v2 beta 1 is now integrated: link to github repo
Otherwise, a guy actually implemented an interface for JoyCaption to run locally. I knew about it for v2 alpha 2, but I checked and he has implemented it for v2 beta 1: link to github repo. The former version was quite simple, but it did the job.

I like Qwen3 more than Gemini for rewriting prompt sentences; it gives such great Flux prompts, very inspiring. You then adapt the prompt to your taste, but Gemini/Qwen are censored while Yasa is not.
It would be interesting to hear which tool you use for captioning / writing prompts, and why.

In fact, I never used a tool to caption, except taggui to draft the prompts when I did some LoRA tests in SDXL, but I refined them afterwards anyway. I found that prompting for a LoRA may be a bit different, but maybe that was not right.
Thanks for all this info. I also never used online tools, as I read that they are - almost all - censored for NSFW. But I will probably give Yasa a try, thanks!

With a fixed seed:

Positive Prompt:
medium-wide-shot, professional color photo, A chubby mature man sitting in speedos. He is looking at the viewer. He has a short beard, extreme hairy chest, extreme hairy arms, Detailed eyes, soft cinematic lighting, warm tones, soft bokeh, This photo was captured with Bolex H16, This photo is dynamic

Negative Prompt left
monochrome, illustration, anime, drawing, artwork, bad hands, worst quality, low quality, bad quality, unfinished, out of focus, out of frame, deformed, disfigured, blurry, smudged, restricted palette, cartoon, painting, text, logo, watermark

Negative Prompt right
illustration, anime, drawing, artwork

Chromas.png

Shorter negative prompt does look a little better in my opinion.

Thank you @DavidStorm !
Indeed, I agree with you.

We talked about details and body hair in particular: in this regard, if you haven't yet, I recommend you check the v34-detail-calibrated version.
It's quite different from v34, as per the tests I have started, but very interesting for fine details, lighting and atmosphere.

Great example! Now we know!

I added Chroma 34 for comparison.
There are some changes. Not sure if I like them. I will run more tests.
The beard and face look a little better, I think. The contrast seems higher.
The fingers look a little too long.

chroma34.png

I agree @DavidStorm ,
Slight improvement on the face (and how do you call that? Head hair?). But not on the body hair and the hand. I also noticed some issues with hands in my tests. But I find the lighting a tiny bit better on v34.
I imagine it was with ddim / ddim_u at CFG 4.5? Could you kindly give it a shot at euler / beta / CFG 4? Or just provide the seed and I'll try on my side.
Anyway, a very interesting test, thank you for sharing my friend!

Changed it to CFG 4, euler + beta.

noise seed:
435435434

Don't like it, to be honest.

screen.png

20250603_230834_Chroma__00001_.png
