nielsr (HF Staff) committed · Commit 10ac3d6 · verified · 1 Parent(s): bc4ff6b

Improve model card: Add pipeline tag, links, and usage


This PR enhances the model card by:
- Adding the `pipeline_tag: video-to-3d` to improve discoverability.
- Linking to the official paper: [Trace Anything: Representing Any Video in 4D via Trajectory Fields](https://huggingface.co/papers/2510.13802).
- Linking to the project page: [https://trace-anything.github.io/](https://trace-anything.github.io/).
- Linking to the GitHub repository: [https://github.com/ByteDance-Seed/TraceAnything](https://github.com/ByteDance-Seed/TraceAnything).
- Including a quick usage example directly from the GitHub repository.

Files changed (1)
  1. README.md +56 -3
README.md CHANGED
@@ -1,3 +1,56 @@
- ---
- license: cc-by-nc-4.0
- ---
+ ---
+ license: cc-by-nc-4.0
+ pipeline_tag: video-to-3d
+ ---
+
+ # Trace Anything: Representing Any Video in 4D via Trajectory Fields
+
+ This repository contains the official implementation of the paper [Trace Anything: Representing Any Video in 4D via Trajectory Fields](https://huggingface.co/papers/2510.13802).
+
+ Trace Anything proposes representing any video as a Trajectory Field: a dense mapping that assigns each pixel in every frame a continuous 3D trajectory as a function of time. The model predicts the entire trajectory field in a single feed-forward pass, enabling applications such as goal-conditioned manipulation, motion forecasting, and spatio-temporal fusion.
+
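+ Concretely, a trajectory field maps each pixel to a parametric 3D curve. The sketch below shows how such per-pixel trajectories could be evaluated, assuming a Bézier parameterization over the 3D control points that the inference script outputs; the exact parameterization used by the model is defined in the repository:
+
+ ```python
+ import torch
+
+ def eval_trajectories(ctrl_pts: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
+     """Evaluate Bézier trajectories at normalized times t in [0, 1].
+
+     ctrl_pts: (..., K, 3) per-pixel 3D control points (curve degree K - 1).
+     t:        (T,) query times.
+     Returns:  (..., T, 3) 3D positions along each trajectory.
+     """
+     K = ctrl_pts.shape[-2]
+     k = torch.arange(K, dtype=t.dtype)
+     # Binomial coefficients C(K-1, k), computed via log-gamma for stability.
+     n = torch.tensor(float(K), dtype=t.dtype)
+     binom = torch.exp(torch.lgamma(n) - torch.lgamma(k + 1) - torch.lgamma(n - k))
+     # Bernstein basis B_{k, K-1}(t), shape (T, K).
+     basis = binom * t[:, None] ** k * (1 - t[:, None]) ** (K - 1 - k)
+     # Blend control points with the basis: (T, K) x (..., K, 3) -> (..., T, 3).
+     return torch.einsum("tk,...kd->...td", basis, ctrl_pts)
+ ```
+
+ For instance, `eval_trajectories(ctrl, torch.linspace(0, 1, 8))` (with `ctrl` a hypothetical control-point tensor) would sample eight positions along every pixel's trajectory.
+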
+ - Project Page: [https://trace-anything.github.io/](https://trace-anything.github.io/)
+ - Code: [https://github.com/ByteDance-Seed/TraceAnything](https://github.com/ByteDance-Seed/TraceAnything)
+
+ ## Overview
+ <div align="center">
+ <img src="https://huggingface.co/depth-anything/trace-anything/resolve/main/assets/teaser.png" width="100%"/>
+ </div>
+
+ ## Installation
+ For detailed installation instructions, please refer to the [GitHub repository](https://github.com/ByteDance-Seed/TraceAnything#setup).
+
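+ As a rough sketch, a typical setup might look like the following; this is illustrative only, and the authoritative steps and dependency list are in the repository's Setup section:
+
+ ```bash
+ # Clone the code and install dependencies (requirements.txt is an assumed file name).
+ git clone https://github.com/ByteDance-Seed/TraceAnything.git
+ cd TraceAnything
+ pip install -r requirements.txt
+ ```
+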
+ ## Sample Usage
+ To run inference with the Trace Anything model, first download the pretrained weights to `checkpoints/trace_anything.pt` (see the GitHub repository for details).
+
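+ As one option, the checkpoint could be fetched programmatically with `huggingface_hub`. In this sketch the repo id is inferred from this model card and the filename is an assumption, so verify both against the repository's file listing:
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Assumed repo id and checkpoint filename; check the "Files" tab of this repo.
+ ckpt_path = hf_hub_download(
+     repo_id="depth-anything/trace-anything",
+     filename="trace_anything.pt",
+     local_dir="checkpoints",
+ )
+ print(ckpt_path)
+ ```
+
+ With the checkpoint in place, run the inference script on a directory of input videos or image sequences:
+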
+ ```bash
+ # Place your input video/image sequence in examples/input/<scene_name>/
+ python scripts/infer.py \
+     --input_dir examples/input \
+     --output_dir examples/output \
+     --ckpt checkpoints/trace_anything.pt
+ ```
+ Results, including 3D control points and confidence maps, will be saved to `<output_dir>/<scene>/output.pt`.
+
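+ The saved file can be inspected with plain PyTorch. A minimal sketch follows; the contents listed in the comment reflect the description above, but the actual key names are not documented here, so print them to discover the schema:
+
+ ```python
+ import torch
+
+ # Load the per-scene result produced by the inference step above
+ # (replace <scene> with your scene name).
+ out = torch.load("examples/output/<scene>/output.pt", map_location="cpu")
+
+ # Expected contents: per-pixel 3D trajectory control points and confidence maps.
+ if isinstance(out, dict):
+     for name, value in out.items():
+         print(name, getattr(value, "shape", type(value)))
+ ```
+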
+ ## Interactive Visualization
+ An interactive 3D viewer is available to explore the generated trajectory fields. Run it using:
+ ```bash
+ python scripts/view.py --output examples/output/<scene>/output.pt
+ ```
+ For more options and remote usage, check the [GitHub repository](https://github.com/ByteDance-Seed/TraceAnything#interactive-visualization-%EF%B8%8F).
+
+ ## Citation
+ If you find this work useful, please consider citing the paper:
+ ```bibtex
+ @misc{liu2025traceanythingrepresentingvideo,
+   title={Trace Anything: Representing Any Video in 4D via Trajectory Fields},
+   author={Xinhang Liu and Yuxi Xiao and Donny Y. Chen and Jiashi Feng and Yu-Wing Tai and Chi-Keung Tang and Bingyi Kang},
+   year={2025},
+   eprint={2510.13802},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2510.13802},
+ }
+ ```