---
license: cc-by-nc-4.0
pipeline_tag: video-to-3d
---
# Trace Anything: Representing Any Video in 4D via Trajectory Fields
This repository contains the official implementation of the paper [Trace Anything: Representing Any Video in 4D via Trajectory Fields](https://huggingface.co/papers/2510.13802).
Trace Anything represents any video as a Trajectory Field: a dense mapping that assigns each pixel in every frame a continuous 3D trajectory, parameterized as a function of time. The model predicts the entire trajectory field in a single feed-forward pass, enabling applications such as goal-conditioned manipulation, motion forecasting, and spatio-temporal fusion.
Project Page: [https://trace-anything.github.io/](https://trace-anything.github.io/)
Code: [https://github.com/ByteDance-Seed/TraceAnything](https://github.com/ByteDance-Seed/TraceAnything)
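Concretely, a trajectory field maps each pixel (u, v) of each frame to a continuous 3D curve X(t). The sketch below illustrates this abstraction by evaluating per-pixel trajectories from control points; the Bézier parameterization, tensor shapes, and names are illustrative assumptions for this sketch, not the model's actual interface (see the paper and repo for those).

```python
import torch

def eval_bezier(ctrl: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Evaluate Bezier trajectories X(t) from per-pixel control points.

    ctrl: [..., K, 3] control points per pixel (K = curve degree + 1).
    t:    [T] query times in [0, 1].
    Returns [..., T, 3] 3D positions along each trajectory.
    """
    K = ctrl.shape[-2]
    k = torch.arange(K, dtype=ctrl.dtype)
    # log C(K-1, k), computed via lgamma for numerical stability.
    log_binom = (torch.lgamma(torch.tensor(float(K)))
                 - torch.lgamma(k + 1.0)
                 - torch.lgamma(float(K) - k))
    # Bernstein basis B_k(t) = C(K-1, k) * t^k * (1-t)^(K-1-k), shape [T, K].
    tt = t.unsqueeze(-1)  # [T, 1]
    basis = torch.exp(log_binom) * tt**k * (1.0 - tt)**(K - 1.0 - k)
    # Blend the control points with the basis weights.
    return torch.einsum("tk,...kc->...tc", basis, ctrl)

# Toy example: a 2x2-pixel field with cubic trajectories (K = 4).
ctrl = torch.randn(2, 2, 4, 3)                    # [H, W, K, 3]
traj = eval_bezier(ctrl, torch.linspace(0, 1, 8))
print(traj.shape)                                 # torch.Size([2, 2, 8, 3])
```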
## Installation
For detailed installation instructions, please refer to the [GitHub repository](https://github.com/ByteDance-Seed/TraceAnything#setup).
## Sample Usage
To run inference with the Trace Anything model, first download the pretrained weights (see the GitHub repository for details), then run the provided script:
```bash
# Download the model weights to checkpoints/trace_anything.pt
# Place your input video/image sequence in examples/input/<name>/
python scripts/infer.py \
    --input_dir examples/input \
    --output_dir examples/output \
    --ckpt checkpoints/trace_anything.pt
```
Results, including 3D control points and confidence maps, will be saved to `examples/output/<name>/output.pt`.
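For downstream use, the saved file can be inspected directly. Below is a minimal sketch that assumes `output.pt` is a plain `torch.save`d dictionary; the exact keys and shapes are defined by the repo, not by this example.

```python
import torch

# Replace <name> with your sequence's folder name under examples/output/.
out = torch.load("examples/output/<name>/output.pt", map_location="cpu")

# Print whatever the file contains; the actual keys (e.g. control points,
# confidence maps) come from the TraceAnything repo, not this sketch.
for key, val in out.items():
    desc = tuple(val.shape) if torch.is_tensor(val) else type(val).__name__
    print(key, desc)
```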
## Interactive Visualization
An interactive 3D viewer is available to explore the generated trajectory fields. Run it using:
```bash
python scripts/view.py --output examples/output/<name>/output.pt
```
For more options and remote usage, check the [GitHub repository](https://github.com/ByteDance-Seed/TraceAnything#interactive-visualization-%EF%B8%8F).
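On a headless or remote machine where the interactive viewer is inconvenient, individual trajectories can also be plotted offline. A rough sketch, reusing the illustrative `eval_bezier` helper from above; the `ctrl_pts` key and its layout are hypothetical placeholders.

```python
import torch
import matplotlib.pyplot as plt

# Replace <name> with your sequence's folder name under examples/output/.
out = torch.load("examples/output/<name>/output.pt", map_location="cpu")
ctrl = out["ctrl_pts"]  # hypothetical key: assumed [H, W, K, 3] control points

# Sample each curve densely using the eval_bezier sketch from above.
pts = eval_bezier(ctrl, torch.linspace(0, 1, 32))  # [H, W, 32, 3]

H, W = pts.shape[:2]
ax = plt.figure().add_subplot(projection="3d")
for u, v in [(H // 4, W // 4), (H // 2, W // 2), (3 * H // 4, 3 * W // 4)]:
    p = pts[u, v]  # [32, 3] points along one pixel's trajectory
    ax.plot(p[:, 0], p[:, 1], p[:, 2], marker=".")
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```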
## Citation
If you find this work useful, please consider citing the paper:
```bibtex
@misc{liu2025traceanythingrepresentingvideo,
  title={Trace Anything: Representing Any Video in 4D via Trajectory Fields},
  author={Xinhang Liu and Yuxi Xiao and Donny Y. Chen and Jiashi Feng and Yu-Wing Tai and Chi-Keung Tang and Bingyi Kang},
  year={2025},
  eprint={2510.13802},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.13802},
}
```