# LoRA Optimizer — Community Cache

Shared analysis results for the LoRA Optimizer ComfyUI node.
LoRA merge analysis is hardware-agnostic — the same LoRA files always produce the same conflict metrics and optimal merge config regardless of GPU tier. This dataset lets users share and reuse those results so nobody has to run the AutoTuner from scratch.
## How It Works
The AutoTuner computes pairwise conflict metrics (cosine similarity, sign conflicts, subspace overlap) and tests merge parameter combinations to find the best config for a set of LoRAs. These results are keyed by content hash (SHA256[:16] of file contents) — not by filename — so they're portable across systems and private by design.
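As a sketch, the content-hash key described above might be computed like this (the helper name and chunk size are my own; the source only specifies SHA256 of the file contents truncated to 16 hex characters):

```python
import hashlib

def lora_content_hash(path: str) -> str:
    # Hypothetical helper: SHA256 over the raw file bytes,
    # truncated to the first 16 hex characters, per the scheme above.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()[:16]
```

Because only the bytes are hashed, renaming or moving a LoRA file does not change its key, which is what makes the results portable across systems.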
When `community_cache=upload_and_download` is set in the AutoTuner node:
- Download: Before running analysis, the node checks this dataset for existing results. A config hit skips the entire sweep (~30–120 s saved). LoRA/pair cache hits speed up the analysis phase even without a full config hit.
- Upload: After a successful sweep (or when replaying from local memory), results are uploaded if the local score beats the current community score for that LoRA set.
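The download-then-fallback flow above can be sketched as follows (the function names, key format, and error handling here are illustrative assumptions, not the node's actual code):

```python
def get_merge_config(hashes, arch, local_sweep, fetch_remote):
    # Hypothetical flow: try the community cache first; a hit skips
    # the whole sweep, a miss or any network error falls back locally.
    key = "_".join(sorted(hashes)) + f"_{arch}"
    try:
        cached = fetch_remote(f"config/{key}.config.json")
        if cached is not None:
            return cached      # config hit: the entire sweep is skipped
    except OSError:
        pass                   # network errors are silently ignored
    return local_sweep()       # always fall back to local computation
```

Sorting the hashes before building the key means the same LoRA set always maps to the same cache entry regardless of load order.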
## Privacy
LoRA filenames are never stored here. Only SHA256[:16] content hashes are used as keys. The uploaded data contains:
- Per-prefix conflict metrics (cosine similarity, sign conflict ratios, subspace overlap)
- Winning merge configuration (sparsification method, merge strategy, refinement level, etc.)
- A composite quality score
No file paths, no usernames, no LoRA names.
## File Structure

    lora/
      {content_hash}.lora.json                  # Per-LoRA per-prefix conflict stats
    pair/
      {hash_a}_{hash_b}.pair.json               # Pairwise conflict metrics (hashes sorted)
    config/
      {hash_a}_{hash_b}_..._{arch}.config.json  # Best merge config + score for a LoRA set

All files include an `algo_version` field. Results from incompatible algorithm versions are ignored automatically.
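A version gate on cached entries might look like this (the constant's value is a placeholder; only the `algo_version` field itself comes from the source):

```python
ALGO_VERSION = "1"  # placeholder: the actual current version is not specified here

def is_compatible(entry: dict) -> bool:
    # Cached results produced by a different algorithm version are ignored.
    return entry.get("algo_version") == ALGO_VERSION
```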
## Usage
In the LoRA AutoTuner node, set `community_cache` to `upload_and_download`. That's the only cache-enabled option; there is no passive download-only mode. If you benefit from the cache, you contribute back.
| Value | Behavior |
|---|---|
| `disabled` (default) | No network interaction |
| `upload_and_download` | Download precomputed results and contribute yours back |
Network errors are silently ignored — the node always falls back to local computation.
## Setup

One time:

    pip install huggingface_hub
    huggingface-cli login
The node picks up your stored token automatically. No environment variables needed for most users.
Headless/server alternative: set `HF_TOKEN` as an environment variable.
Then set `community_cache=upload_and_download` in the AutoTuner node and run as normal. Everything else is automatic.
## Score-Based Replacement
Configs are only uploaded when your local score beats the community score. Users with more thorough sweeps (`top_n=10`) or better hardware naturally contribute higher-quality results over time.
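The replacement rule reduces to a one-line predicate. This sketch assumes a strict comparison and treats a missing community entry as an automatic win (both are my assumptions, not confirmed by the source):

```python
def should_upload(local_score: float, community_score) -> bool:
    # Upload only when the local sweep strictly beats the best known
    # community score for this LoRA set, or when no entry exists yet.
    return community_score is None or local_score > community_score
```

A strict `>` also means ties leave the existing community entry in place, avoiding churn from equivalent sweeps.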