license: creativeml-openrail-m
WAN 2.2 – I2V Workflow (Optimized for 12GB GPUs)
A fast, clean, and VRAM-efficient Image-to-Video workflow built around WAN 2.2, with quick render times on mid-range GPUs. I kept it simple and easy to use while maintaining good results, relying on well-known nodes and minimizing node bloat. The workflow is commented throughout and has a clear flow.
Ver 1.0 - Base workflow; can do 5-second clips in one iteration (very fast for 12GB)
Ver 1.1 - More stability; can run 100 times consecutively over 8 hours
Ver 1.2 - Renders 20-second videos; wire cleanup
Ver 1.3 - MMAudio added
Ver 1.4 - 2x upscaling, color correction, and sharpening between passes for quality consistency
QWEN Image Edit workflow (Optimized for 12GB GPUs)
Designed to run large AIO QWEN checkpoints (≈28GB) while still generating high-resolution outputs on 12GB VRAM GPUs.
The focus here is:
Image editing / guided edits
Very low step counts
Stable results at low CFG
Aggressive memory management
Clean upscale + post polish
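The "aggressive memory management" point is what lets a ~28GB checkpoint run on a 12GB card: models are swapped out of VRAM between stages. A minimal sketch of that idea, outside of any node graph, looks like the following (the `free_vram` helper is hypothetical, not a node in the workflow; the `torch` calls are standard PyTorch cache-management APIs and are guarded so the sketch also runs without a GPU):

```python
import gc

def free_vram():
    """Release cached GPU memory between workflow stages.

    gc.collect() drops Python references first so the subsequent
    CUDA cache flush can actually return the freed blocks.
    """
    gc.collect()
    try:
        import torch  # assumed present in a ComfyUI environment
        if torch.cuda.is_available():
            torch.cuda.empty_cache()   # return cached allocator blocks to the driver
            torch.cuda.ipc_collect()   # clean up inter-process CUDA handles
    except ImportError:
        pass  # no torch installed: nothing to free

# e.g. call between the edit pass and the upscale pass
free_vram()
```

In ComfyUI itself this role is typically played by model-unload/purge nodes placed between passes rather than by explicit code.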
Z-Image Turbo workflow (Optimized multi-phase)
Designed to extract maximum detail, edge fidelity, and material realism on 12GB VRAM GPUs. This workflow also adds seed-based variance to the conditioning, so outputs from the same prompt have more variety, similar to SDXL, Pony, and IL models.
This workflow uses controlled sigma shaping, Res-2 samplers, and phased refinement passes to stabilize detail while avoiding common ZIT artifacts like:
Over-etched hair
Shimmering edges
Checkerboard blockiness
CFG-induced harshness
The result is clean, high-contrast outputs that scale well across portraits, fashion, cinematic scenes, and hard-surface material tests.
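To make "sigma shaping" and "seed variance" concrete, here is a minimal sketch under stated assumptions: `shape_sigmas` uses one common shift-style shaping formula (the workflow's exact shaping may differ), and `jitter_conditioning` is a hypothetical stand-in for adding small seed-dependent noise to a conditioning vector so identical prompts still diverge:

```python
import random

def shape_sigmas(sigmas, shift=1.5):
    """Reshape a descending noise schedule (1.0 -> 0.0) so more
    steps land in the high-noise region. This is one common
    'shift' formulation, shown for illustration only."""
    return [(shift * s) / (1 + (shift - 1) * s) if s > 0 else 0.0
            for s in sigmas]

def jitter_conditioning(cond, seed, strength=0.02):
    """Add small, seed-deterministic noise to a conditioning
    vector (hypothetical stand-in for the workflow's
    seed-variance step)."""
    rng = random.Random(seed)
    return [c + rng.uniform(-strength, strength) for c in cond]

base = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
shaped = shape_sigmas(base)   # still descends from 1.0 to 0.0
```

The shaping keeps the endpoints fixed while bending the middle of the schedule, which is what stabilizes detail without the checkerboard and over-etching artifacts listed above.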
The Auto IMG Batch Caption workflow automatically generates clean, structured image captions by combining WD14 tagging, Florence-style natural-language descriptions, and a custom trigger token for training consistency. The goal is one-click captioning of training datasets with proven results; I have made many high-quality LoRAs from the datasets this workflow outputs.
Uses WD14 to extract high-quality tag metadata
Uses Florence to generate a natural-language image description
Injects a custom trigger token at the start of every caption
Outputs both tags + descriptive text in a single caption block
Saves captions to a user-defined folder inside ComfyUI/output
Important Setup Note
You must create a folder inside:
ComfyUI/input/
Example:
ComfyUI/input/Captions
Then select that folder in the caption loader node.
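The folder can be created by hand, or with a one-liner like this (the `Captions` name matches the example above; adjust `comfy_root` to wherever your ComfyUI install lives):

```python
import os

# Create the caption input folder the loader node expects.
comfy_root = "ComfyUI"  # path to your ComfyUI install
captions_dir = os.path.join(comfy_root, "input", "Captions")
os.makedirs(captions_dir, exist_ok=True)  # no-op if it already exists
```

`exist_ok=True` makes the snippet safe to re-run before every batch.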
Captions follow this format:
TRIGGER, wd14_tags_here, florence_generated_description_here
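The format above can be sketched as a small helper; this is illustrative only (the actual assembly happens in the node graph), and the trigger token, tags, and description shown are made-up examples:

```python
def build_caption(trigger, wd14_tags, florence_desc):
    """Join trigger token, WD14 tags, and Florence description
    into the single caption line the workflow writes out."""
    tags = ", ".join(t.strip() for t in wd14_tags)
    return f"{trigger}, {tags}, {florence_desc}"

caption = build_caption(
    "mytoken",                             # custom trigger token
    ["1girl", "red dress", "outdoors"],    # WD14 tag output
    "a woman in a red dress standing in a garden",  # Florence description
)
# → "mytoken, 1girl, red dress, outdoors, a woman in a red dress standing in a garden"
```

Keeping the trigger token first means every caption in the dataset starts with the same activation word, which is what gives the trained LoRA a consistent trigger.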