---
license: mit
language:
- en
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
tags:
- lora
- diffusers
- flux
---
This is a Kontext mask-free outfit-swap (try-on) LoRA, trained on batch-generated outputs of the banana model; the examples show its swap results.
All of the example results were produced by passing the two images in directly, with no mask, to complete the outfit swap.
Based on the test results, it has an advantage over the banana model in terms of consistency.
The workflow for each image is nearly identical, with only minor parameter tweaks; you can inspect the details by dragging any example image into ComfyUI. A rough diffusers sketch is also included after the Reddit link below.
Discussion on Reddit:
https://www.reddit.com/r/comfyui/comments/1nchoit/kontext_tryon_lora_no_need_for_a_mask_auto_change/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
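For those using diffusers instead of ComfyUI, a minimal sketch of the same idea might look like the following. It is not the author's workflow: the LoRA file name (`tryon_lora.safetensors`), the side-by-side stitching of the two inputs, the prompt wording, and the guidance/step values are all assumptions; adjust them to match the actual weight file in this repository and the parameters embedded in the example images.

```python
import torch
from PIL import Image
from diffusers import FluxKontextPipeline

# Load the base Kontext model and attach this LoRA.
# "tryon_lora.safetensors" is a placeholder file name; replace it (and the
# repo path) with the actual LoRA weight file from this repository.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/this-lora", weight_name="tryon_lora.safetensors")
pipe.to("cuda")

# Combine the person image and the garment image into one conditioning image
# by stitching them side by side (an assumption about how "passing two images
# directly" is realized; the ComfyUI workflow may differ).
person = Image.open("person.png").convert("RGB")
garment = Image.open("garment.png").convert("RGB")
height = max(person.height, garment.height)
person = person.resize((int(person.width * height / person.height), height))
garment = garment.resize((int(garment.width * height / garment.height), height))
stitched = Image.new("RGB", (person.width + garment.width, height))
stitched.paste(person, (0, 0))
stitched.paste(garment, (person.width, 0))

# Run the edit; prompt wording and sampler settings below are illustrative only.
result = pipe(
    image=stitched,
    prompt="Replace the person's outfit with the clothing shown on the right.",
    guidance_scale=2.5,
    num_inference_steps=28,
).images[0]
result.save("tryon_result.png")
```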