---
title: README
emoji: 🐨
colorFrom: red
colorTo: purple
sdk: static
pinned: false
---

The Multimodal Reasoning Lab brings together researchers from Columbia University, the University of Maryland, USC, and NYU.

We created the Zebra‑CoT dataset to enable interleaved vision–language reasoning, and have built state-of-the-art visual reasoning models on this foundation.