---
title: README
emoji: 🐨
colorFrom: red
colorTo: purple
sdk: static
pinned: false
---
The Multimodal Reasoning Lab brings together researchers from Columbia University, the University of Maryland, USC, and NYU.
We created the Zebra‑CoT dataset to enable interleaved vision–language reasoning and have developed state-of-the-art visual reasoning models built on this foundation.
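
For reference, here is a minimal sketch of loading Zebra-CoT with the Hugging Face `datasets` library. The repository id `multimodal-reasoning-lab/Zebra-CoT` is an assumption based on this org's name; check the dataset page for the exact id and available configurations.

```python
# Minimal sketch: loading Zebra-CoT via the `datasets` library.
# The repo id below is assumed from the org name and may differ.
from datasets import load_dataset

ds = load_dataset("multimodal-reasoning-lab/Zebra-CoT", split="train")
print(ds[0])  # inspect one interleaved vision-language reasoning example
```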