
SpaceLLaVA-7B

Open In Colab

Run the Colab above to analyze attention patterns for SpaceLLaVA-7B using this TransformerLens notebook.
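
The notebook's core idea is caching per-head attention patterns with hooks. As a minimal self-contained sketch of that technique, the snippet below registers a forward hook on a toy `nn.MultiheadAttention` module and caches its attention weights; the actual notebook applies the same hook-based caching to SpaceLLaVA-7B's language-model layers.

```python
# Minimal sketch of hook-based attention caching, shown on a toy
# nn.MultiheadAttention module so it runs standalone (no model download).
import torch
import torch.nn as nn

torch.manual_seed(0)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)

cached = {}

def cache_pattern(module, inputs, output):
    # output is (attn_output, attn_weights); the weights have shape
    # (batch, num_heads, tgt_len, src_len) when average_attn_weights=False
    cached["pattern"] = output[1].detach()

handle = attn.register_forward_hook(cache_pattern)

x = torch.randn(1, 16, 64)  # (batch, seq, d_model)
attn(x, x, x, need_weights=True, average_attn_weights=False)
handle.remove()

pattern = cached["pattern"]
print(pattern.shape)  # torch.Size([1, 8, 16, 16])
```

Each row of the cached pattern is a softmax distribution over source positions, so inspecting which positions a given query token attends to is a simple slice of this tensor.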

See how SpaceLLaVA-7B performs on Q-Spatial-Bench in the Colab below.

Open In Colab
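
Q-Spatial-Bench scores quantitative spatial estimates (e.g. "how far is the chair from the table?"). As a hedged sketch of the scoring idea, the snippet below parses a numeric distance from a model answer and counts it correct when it falls within a factor of two of ground truth; the regex, unit table, and factor-of-two threshold are illustrative assumptions here, not the benchmark's official implementation.

```python
# Hedged sketch of a Q-Spatial-Bench-style scoring loop: parse the first
# "<number> <unit>" in the answer, convert to meters, and count a
# prediction correct when within a multiplicative factor of 2 of truth.
import re

UNIT_TO_METERS = {"mm": 0.001, "cm": 0.01, "m": 1.0, "meter": 1.0,
                  "meters": 1.0, "inch": 0.0254, "inches": 0.0254,
                  "ft": 0.3048, "foot": 0.3048, "feet": 0.3048}

def parse_distance(answer: str):
    """Return the first '<number> <unit>' in the answer, in meters."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*([a-zA-Z]+)", answer)
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2).lower()
    return value * UNIT_TO_METERS[unit] if unit in UNIT_TO_METERS else None

def is_correct(pred_m: float, gt_m: float, delta: float = 2.0) -> bool:
    """Correct if within a multiplicative factor `delta` of ground truth."""
    return max(pred_m / gt_m, gt_m / pred_m) <= delta

# Toy (answer text, ground truth in meters) pairs for illustration.
answers = [("The chair is about 1.5 meters away.", 1.0),
           ("Roughly 40 cm.", 0.5),
           ("Around 10 feet.", 1.0)]
score = 0
for text, gt in answers:
    pred = parse_distance(text)
    score += pred is not None and is_correct(pred, gt)
print(f"success rate: {score}/{len(answers)}")  # success rate: 2/3
```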

Visualize SpaceLLaVA-7B attention in the following Colab.

Open In Colab
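
The visualization relies on a structural fact about LLaVA-1.5 models: the image is encoded as 576 tokens, a 24×24 patch grid from CLIP ViT-L/14 at 336px resolution. One text token's attention over those positions can therefore be reshaped into a 24×24 heatmap and overlaid on the image. The sketch below uses random weights in place of a real forward pass, and `img_token_start` is an assumed offset for illustration.

```python
# Hedged sketch: reshape one query token's attention over LLaVA-1.5's
# 576 image tokens into a 24x24 spatial heatmap. Random weights stand in
# for real attention from a forward pass.
import numpy as np

GRID = 24                       # 336px input / 14px patches
NUM_IMG_TOKENS = GRID * GRID    # 576 image tokens in LLaVA-1.5

rng = np.random.default_rng(0)
seq_len = NUM_IMG_TOKENS + 32           # image tokens plus some text tokens
attn_row = rng.random(seq_len)
attn_row /= attn_row.sum()              # one query token's attention dist.

img_token_start = 5                     # assumed offset of first image token
img_attn = attn_row[img_token_start:img_token_start + NUM_IMG_TOKENS]
heatmap = img_attn.reshape(GRID, GRID)  # spatial map over the image

# e.g. plt.imshow(heatmap, alpha=0.6) over the resized input image
print(heatmap.shape)
```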

An experiment inspired by *Linear Spatial World Models in Large Language Models*.

Open In Colab
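
The core test in that line of work is a linear probe: if spatial coordinates are linearly decodable from a model's hidden states, a least-squares readout should recover them with high held-out accuracy. The sketch below uses synthetic activations with planted linear structure in place of real SpaceLLaVA-7B hidden states; the dimensions and noise level are illustrative assumptions.

```python
# Hedged sketch of a linear spatial-world-model probe: fit a least-squares
# map from (synthetic) hidden states to 2-D coordinates and measure
# held-out R^2. High R^2 indicates linear decodability.
import numpy as np

rng = np.random.default_rng(0)
n, d_model = 500, 64

coords = rng.uniform(-1, 1, size=(n, 2))       # ground-truth (x, y)
W_true = rng.normal(size=(2, d_model))         # planted linear structure
hidden = coords @ W_true + 0.05 * rng.normal(size=(n, d_model))

# Fit the probe on a train split, evaluate R^2 on a held-out split.
train, test = slice(0, 400), slice(400, 500)
probe, *_ = np.linalg.lstsq(hidden[train], coords[train], rcond=None)
pred = hidden[test] @ probe

ss_res = ((coords[test] - pred) ** 2).sum()
ss_tot = ((coords[test] - coords[test].mean(0)) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"held-out R^2: {r2:.3f}")
```

With a real model, `hidden` would be layer activations at the token describing each object, and a chance-level R² would falsify the linear-decodability hypothesis for that layer.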

Check out additional resources on mechanistic interpretability techniques compatible with LLaVA-1.5-based VLMs.

Model size: 7.06B parameters (Safetensors, F16)

Model repository: salma-remyx/spacellava-1.5-7b