You are helping me restructure and fix my VLM spatial representation analysis experiments. Create all Python scripts and shell scripts under /data/shared/Qwen/experiments/ with the following folder structure:
/data/shared/Qwen/experiments/
├── correct_filter/
│   ├── correct_filter_analysis.py
│   ├── run_molmo.sh
│   ├── run_nvila.sh
│   └── run_qwen.sh
└── swap_analysis/
    ├── swap_analysis.py
    ├── run_molmo.sh
    ├── run_nvila.sh
    └── run_qwen.sh
CONTEXT
I'm studying how Vision-Language Models (VLMs) encode spatial concepts (left, right, above, under, far, close) in their hidden representations. I fine-tune models at different data scales (vanilla, 80k, 400k, 800k, 2m) and analyze how representations change.
The dataset is EmbSpatial-Bench (TSV at /data/shared/Qwen/EmbSpatial-Bench/EmbSpatial-Bench.tsv). Each row has: index, image (base64), question, answer, category (left/right/above/under/far/close), A/B/C/D options.
I use hooks on transformer layers to extract the last token's hidden state during the prefill pass (seq_len > 1 only). I then analyze cosine similarity between category-averaged representations.
EXISTING CODE TO REFERENCE
The following files contain the current (buggy) implementations. Use them as the foundation: keep all working parts (model extractors, data loading, visualization helpers) and apply the fixes listed below.
- correct_filter: /data/shared/Qwen/experiments/exp2a_correct_filter/exp2a_correct_filter_analysis.py
- swap_analysis: /data/shared/Qwen/experiments/exp2a_swap_analysis/exp2a_swap_analysis.py
- bbox analysis reference: /data/shared/Qwen/experiments/analyze_counter_consistent.py
Read all of these files thoroughly before making any changes.
FIXES TO APPLY TO BOTH SCRIPTS
Fix 1: Add "Answer with only one word." to all prompts
Current prompts produce free-form sentences like "The table is below the picture." instead of "under". This causes near-zero accuracy for some categories because check_answer looks for exact spatial keywords.
For pairwise (left/right/above/under):
Current: "Is the {obj1} to the left or right of the {obj2}?"
Fixed: "Is the {obj1} to the left or right of the {obj2}? Answer with only one word."
Current: "Is the {obj1} above or under the {obj2}?"
Fixed: "Is the {obj1} above or under the {obj2}? Answer with only one word."
For distance (far/close):
Current: "Compared to {reference_object}, is {target_object} far or close from you?"
Fixed: "Compared to {reference_object}, is {target_object} far or close from you? Answer with only one word."
Apply this in ALL places where these prompts are constructed (both correct_filter and swap_analysis).
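To make the intent concrete, here is a minimal sketch of a single prompt builder covering all three templates; `build_prompt` and its argument names are illustrative, not from the existing code, so adapt it to wherever the scripts currently construct prompts.

```python
# Hypothetical helper sketching Fix 1: every template ends with the
# one-word instruction so check_answer sees a single keyword.
ONE_WORD = " Answer with only one word."

def build_prompt(category, target_object, reference_object):
    if category in ("left", "right"):
        q = f"Is the {target_object} to the left or right of the {reference_object}?"
    elif category in ("above", "under"):
        q = f"Is the {target_object} above or under the {reference_object}?"
    else:  # far / close
        q = f"Compared to {reference_object}, is {target_object} far or close from you?"
    return q + ONE_WORD
```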
Fix 2: Expand answer matching to handle synonyms
Even with the prompt fix, some models may still produce synonyms. Update check_answer to handle:
- "below" → under
- "beneath" → under
- "near" → close
- "nearby" → close
- "distant" → far
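A sketch of the expanded matching, assuming `check_answer` can be reduced to token-level keyword lookup (the table and function signature here are illustrative; keep the existing signature):

```python
# Fix 2 sketch: normalize synonyms before matching spatial keywords.
SYNONYMS = {"below": "under", "beneath": "under",
            "near": "close", "nearby": "close", "distant": "far"}
KEYWORDS = {"left", "right", "above", "under", "far", "close"}

def check_answer(prediction, gold_category):
    # Tokenize loosely, mapping each synonym onto its canonical category.
    tokens = prediction.lower().replace(".", " ").replace(",", " ").split()
    normalized = {SYNONYMS.get(t, t) for t in tokens}
    # Correct only if the gold keyword appears and no competing one does.
    return (normalized & KEYWORDS) == {gold_category}
```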
FIXES SPECIFIC TO correct_filter_analysis.py
Fix 3: Improved correct vs all comparison β trajectory plots
Current code only compares correct-only vs all at a single representative layer, which can be misleading. Replace with trajectory plots across ALL layers:
Generate these overlay trajectory plots for each scale:
- correct + all: Two lines per pair (solid=correct, dashed=all)
- correct + incorrect: Two lines per pair (solid=correct, dashed=incorrect)
- correct + incorrect + all: Three lines per pair
Key pairs to plot:
- above-far (hypothesis)
- under-close (hypothesis)
- left-right (control)
- above-under (within axis)
- far-close (within axis)
Also generate cross-scale versions: one panel per pair, lines colored by scale, with separate figures for correct-only and all-samples.
Keep the existing ablation summary (accuracy vs similarity) but use the trajectory-based comparison instead of single-layer comparison.
FIXES SPECIFIC TO swap_analysis.py
Fix 4: Fix cross-group quads index matching
The create_cross_group_quads function fails because TSV index column values don't match HF dataset question_id values (type or format mismatch). All quads get no_bbox.
Fix: After loading the HF dataset, print sample keys from both sources to debug. Try matching by:
- Direct match (same type)
- String cast: str(tsv_index) == str(question_id)
- If format differs (e.g., TSV has int, HF has "question_XXXX"), build an explicit mapping
Add a validation log: "Matched X/Y indices between TSV and HF dataset" so we can verify it works.
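A minimal sketch of the fallback matching plus the validation log; `build_index_map`, `tsv_indices`, and `hf_ids` are illustrative stand-ins for the TSV index column and the HF `question_id` values:

```python
# Fix 4 sketch: try direct match, then string-cast match, and log coverage.
def build_index_map(tsv_indices, hf_ids):
    hf_by_str = {str(q): q for q in hf_ids}
    mapping = {}
    for idx in tsv_indices:
        if idx in hf_ids:                # direct match (same type)
            mapping[idx] = idx
        elif str(idx) in hf_by_str:      # string-cast match
            mapping[idx] = hf_by_str[str(idx)]
    print(f"Matched {len(mapping)}/{len(tsv_indices)} indices "
          f"between TSV and HF dataset")
    return mapping
```

If coverage is still zero after the string cast, the printed sample keys should reveal the explicit format (e.g., a "question_XXXX" prefix) to strip or add.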
Fix 5: Fix delta consistency metric
Current compute_delta_consistency computes within-GROUP pairwise cosine. This is wrong because opposite categories within a group (e.g., left and right) have opposite Δ directions, causing well-separated concepts to show LOW consistency.
Replace with TWO metrics:
a) Within-category consistency: Compute pairwise cosine among Δ vectors of the SAME category only (left Δs with left Δs, right Δs with right Δs). This measures whether same-concept swaps point in a consistent direction.
b) Sign-corrected group consistency: For each group, pick one category as "canonical" (e.g., left for horizontal). Multiply the opposite category's Δ by -1 to align directions. Then compute pairwise cosine over the whole sign-corrected group. This measures whether the group has a consistent spatial axis.
Canonical categories: left (horizontal), above (vertical), far (distance).
Save both metrics. Generate plots for both (trajectory across layers, cross-scale comparison).
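The two metrics can be sketched as below; `deltas_by_cat` (category → list of Δ vectors at one layer) and the function names are assumptions, not the existing API:

```python
import numpy as np

def _pairwise_cos(deltas):
    # Mean pairwise cosine over a list of Δ vectors.
    X = np.stack(deltas)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T
    iu = np.triu_indices(len(deltas), k=1)
    return float(sim[iu].mean())

def within_category_consistency(deltas_by_cat):
    # Metric (a): same-category Δs only.
    return {c: _pairwise_cos(d) for c, d in deltas_by_cat.items() if len(d) > 1}

def sign_corrected_group_consistency(deltas_by_cat, canonical, opposite):
    # Metric (b): flip the opposite category's Δs onto the canonical
    # direction, then score the whole group as one set.
    flipped = list(deltas_by_cat.get(canonical, []))
    flipped += [-d for d in deltas_by_cat.get(opposite, [])]
    return _pairwise_cos(flipped)
```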
Fix 6: Add prediction stats visualization
Current code saves pred_stats_{scale}.json but generates no plot. Add:
- Bar chart: for each group, bars showing acc_orig, acc_swap, acc_both, colored by scale
- Cross-scale line plot: acc_both trajectory across scales, per group
Fix 7: Generate Δ-based heatmap and trajectory (new analysis)
Use per-category mean Δ vectors as "representations" and compute a 6×6 cosine similarity matrix, same as exp2a_modified's heatmap. This removes additive template effects.
For each scale × representative layer:
- Save delta_heatmap_{scale}_L{layer}.png
- Save delta_similarity_{scale}_L{layer}.csv
Also generate cross-layer trajectory plots for key pairs using Δ-based similarity.
Note: Within-group pairs (e.g., left vs right) should show cosine ≈ -1 if the model discriminates well, since Δ_left ≈ -Δ_right.
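The core 6×6 computation is small; a sketch, with `delta_similarity_matrix` and `mean_delta_by_cat` (category → mean Δ vector) as illustrative names:

```python
import numpy as np

CATEGORIES = ["left", "right", "above", "under", "far", "close"]

def delta_similarity_matrix(mean_delta_by_cat):
    # Fix 7 sketch: 6x6 cosine matrix over per-category mean Δ vectors.
    M = np.stack([mean_delta_by_cat[c] for c in CATEGORIES])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    return M @ M.T
```

The resulting matrix feeds both the heatmap PNG and the CSV directly; within-group entries near -1 are the expected signature of good discrimination.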
MODEL CONFIGURATIONS
MODEL_CONFIGS = {
'molmo': {
'vanilla': 'allenai/Molmo-7B-O-0924',
'80k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_80k/unshared',
'400k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_400k/unshared',
'800k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_800k/unshared',
'2m': '/data/shared/Qwen/molmo/outputs/data_scale_exp_2m/unshared',
},
'nvila': {
'vanilla': '/data/shared/Qwen/mydisk/NVILA-Lite-2B',
'80k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_80K-20251108_180221',
'400k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_400K-20251108_180221',
'800k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_800K-20251108_180221',
'2m': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_2M-20260205_003632',
'roborefer': '/data/shared/Qwen/mydisk/RoboRefer_model',
},
'qwen': {
'vanilla': 'Qwen/Qwen2.5-VL-3B-Instruct',
'80k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_80k-20251114_120221',
'400k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_400k-20251114_120221',
'800k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_800k-20251114_120221',
'2m': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_2m-20260109_120517',
},
}
SHELL SCRIPT SPECIFICATIONS
Each model gets its own run script. Pattern:
Molmo: PYTHON="conda run --no-capture-output -n molmo python", scales=(vanilla 80k 400k 800k 2m), GPUS=(0 1 2 3 4)
NVILA: PYTHON="conda run --no-capture-output -n vila python", scales=(vanilla 80k 400k 800k 2m roborefer), GPUS=(0 1 2 3 4 5)
Qwen: PYTHON="/usr/bin/python3", scales=(vanilla 80k 400k 800k 2m), GPUS=(0 1 2 3 4)
Each script:
- Launches each scale on a separate GPU in parallel with --no-auto-roborefer
- Waits for all to finish, reports success/failure
- Runs --merge mode to generate cross-scale plots
- Logs go to logs/{model}/{scale}.log
For swap_analysis, the merge step handles all cross-scale analyses automatically.
OUTPUT DIRECTORIES
- correct_filter results: /data/shared/Qwen/experiments/correct_filter/results/{model_type}/
- swap_analysis results: /data/shared/Qwen/experiments/swap_analysis/results/{model_type}/
EXTRACTOR CLASSES
Keep the existing extractor implementations (MolmoExtractor, NVILAExtractor, RoboReferExtractor, Qwen25VLExtractor) exactly as they are in the reference files. They work correctly. The key design:
- Base class registers hooks on target layers
- Hook captures last token hidden state during prefill only (seq_len > 1)
- extract_and_predict() returns (hidden_states_dict, predicted_answer_text) in one forward pass
- MolmoExtractor handles both native (config.yaml + model.pt) and HuggingFace formats
- NVILAExtractor uses llava imports with sys.path manipulation to avoid conflicts
- RoboReferExtractor extends NVILAExtractor with a different sys.path for RoboRefer
- Qwen25VLExtractor loads processor from base model for fine-tuned checkpoints
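The extractors themselves stay untouched; purely to illustrate the prefill-only hook design described above, here is a minimal sketch (class and attribute names are hypothetical, and the (batch, seq, hidden) layout is an assumption about the layer output):

```python
import torch

class LastTokenHook:
    # Illustrative sketch of the design: capture the last token's hidden
    # state only during the prefill pass (seq_len > 1).
    def __init__(self):
        self.hidden = {}

    def make_hook(self, layer_idx):
        def hook(module, inputs, output):
            # Transformer layers often return tuples; take the hidden states.
            h = output[0] if isinstance(output, tuple) else output
            # Decode steps have seq_len == 1 and are skipped.
            if h.shape[1] > 1:
                self.hidden[layer_idx] = h[:, -1, :].detach().cpu()
        return hook
```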
FIXES SPECIFIC TO swap_analysis.py (continued)
Fix 8: Category validity check + both-correct Δ filtering
Some models predict the same answer for all samples in a category (e.g., always "close" for far questions), making Δ analysis meaningless for that category.
a) Category-level validity check: After extracting predictions, compute per-category accuracy for both orig and swap. If either accuracy is below chance (50% for binary), mark that category as "unreliable" in logs and results. In the Δ-based heatmap and consistency plots, either exclude unreliable categories or annotate them with a warning (e.g., hatching or an asterisk).
b) Both-correct filtering: Add a --both-correct-only mode (default: compute BOTH filtered and unfiltered). For Δ analysis (consistency, Δ-based heatmap, cross-group alignment), also compute results using only pairs where BOTH orig and swap predictions are correct. This ensures Δ vectors come from pairs where the model actually distinguishes the spatial relation.
Save results for both "all pairs" and "both-correct pairs" side by side. Generate comparison plots showing how filtering affects results. This is NOT the same as the correct_filter experiment: we're not comparing correct vs incorrect representations, we're ensuring Δ vectors are meaningful.
Report in summary:
- Per scale × category: n_total, n_both_correct, acc_orig, acc_swap, acc_both
- Flag categories where analysis may be unreliable
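The per-category summary can be sketched as one small function; `summarize_category` and the record field names (`orig_correct`, `swap_correct`) are illustrative:

```python
# Fix 8 summary sketch: per-category counts, accuracies, and the
# below-chance "unreliable" flag (chance = 0.5 for binary questions).
def summarize_category(records, chance=0.5):
    n = len(records)
    acc_orig = sum(r["orig_correct"] for r in records) / n
    acc_swap = sum(r["swap_correct"] for r in records) / n
    both = [r for r in records if r["orig_correct"] and r["swap_correct"]]
    return {
        "n_total": n,
        "n_both_correct": len(both),
        "acc_orig": acc_orig,
        "acc_swap": acc_swap,
        "acc_both": len(both) / n,
        "unreliable": acc_orig < chance or acc_swap < chance,
    }
```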
IMPORTANT NOTES
- Do NOT create separate post-hoc scripts (compute_swap_cosine.py, compute_delta_consistency.py). All analyses (swap cosine cos(orig, swap), delta consistency (within-category and sign-corrected), Δ-based heatmaps, cross-group alignment, prediction stats plots) must be computed within swap_analysis.py itself. Per-scale analyses run during extraction. Cross-scale comparisons, and any analyses that can be computed from saved intermediate files (NPZ, JSON), run during --merge mode. The shell script should only need to call swap_analysis.py (once per scale in parallel, then once with --merge).
- All scripts should support a --merge mode that skips extraction and only generates cross-scale comparison plots from saved per-scale results
- swap_analysis: --max-samples-per-category default=200
- correct_filter: loads ALL samples (no limit), balanced sampling after correct/incorrect split
- Use matplotlib.use('Agg') for headless environments
- Always call torch.cuda.empty_cache() after each scale
- Save intermediate results per-scale so parallel execution works (each GPU saves independently, merge combines)