SpaceLLaVA-7B
Colab notebooks accompany this model:
- Analyze attention patterns for SpaceLLaVA-7B with a TransformerLens notebook
- Evaluate SpaceLLaVA-7B on Q-Spatial-Bench
- Visualize SpaceLLaVA-7B attention (a minimal attention-extraction sketch follows this list)
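For a quick look outside Colab, here is a minimal sketch of loading the checkpoint with Hugging Face transformers and requesting attention weights. It assumes the repository ships HF-format LLaVA weights loadable by `LlavaForConditionalGeneration`, uses the standard LLaVA-1.5 prompt template, and a placeholder image path (`scene.jpg`); adjust to your setup.

```python
# Minimal sketch (not from the original card): load SpaceLLaVA-7B and inspect
# per-layer attention weights. The image path and prompt are placeholders.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "salma-remyx/spacellava-1.5-7b"  # assumes HF-format LLaVA weights
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="eager",  # eager attention so weights can be returned
)
processor = AutoProcessor.from_pretrained(model_id)

prompt = "USER: <image>\nHow far apart are the chair and the table? ASSISTANT:"
image = Image.open("scene.jpg")  # placeholder image
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: one (batch, heads, seq_len, seq_len) tensor per decoder layer
print(len(outputs.attentions), outputs.attentions[0].shape)
```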
An experiment inspired by *Linear Spatial World Models in Large Language Models*.
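The central tool in that line of work is a linear probe read out from hidden states. As a rough, hypothetical illustration only (the file names, layer choice, and distance targets below are placeholders, not the paper's or this repository's protocol), such a probe on cached activations could look like:

```python
# Hypothetical linear-probe sketch: the .npy files and targets are placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

hidden_states = np.load("layer20_hidden_states.npy")  # (n_examples, hidden_dim), cached from the LM
distances = np.load("object_distances.npy")           # (n_examples,), e.g. metric distances in metres

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, distances, test_size=0.2, random_state=0
)
probe = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", probe.score(X_test, y_test))  # a high R^2 suggests a linearly decodable spatial signal
```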
Check out these additional resources for mechanistic interpretability techniques compatible with LLaVA-1.5-based VLMs:
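One generic technique that applies to any LLaVA-1.5-based VLM is caching per-layer hidden states of the language model with PyTorch forward hooks. The sketch below reuses the loading assumptions from the first snippet; the attribute path to the decoder layers can differ across transformers versions, hence the fallback.

```python
# Sketch: cache residual-stream activations from each decoder layer via forward hooks.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "salma-remyx/spacellava-1.5-7b"  # assumes HF-format weights, as above
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Locate the language model's decoder layers; the layout varies by transformers version.
lm = model.language_model
decoder_layers = lm.model.layers if hasattr(lm, "model") else lm.layers

cache = {}
def make_hook(name):
    def hook(module, args, output):
        hidden = output[0] if isinstance(output, tuple) else output
        cache[name] = hidden.detach().float().cpu()
    return hook

handles = [
    layer.register_forward_hook(make_hook(f"layer_{i}"))
    for i, layer in enumerate(decoder_layers)
]

prompt = "USER: <image>\nWhich object is closer to the camera? ASSISTANT:"
inputs = processor(images=Image.open("scene.jpg"), text=prompt, return_tensors="pt")
inputs = inputs.to(model.device, torch.float16)
with torch.no_grad():
    model(**inputs)

for h in handles:
    h.remove()

print(cache["layer_0"].shape)  # (batch, seq_len, hidden_dim); one entry per layer in `cache`
```

The cached tensors can then feed a linear probe like the one sketched above.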
Model tree for salma-remyx/spacellava-1.5-7b:
- Base model: liuhaotian/llava-v1.5-7b