Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation
Abstract
Ponimator uses two conditional diffusion models, trained on close-contact poses from motion-capture data, to synthesize interactive poses and animate them into motion, enabling versatile human-human interaction animation tasks.
Close-proximity human-human interactive poses convey rich contextual information about interaction dynamics. Given such poses, humans can intuitively infer the context and anticipate possible past and future dynamics, drawing on strong priors of human behavior. Inspired by this observation, we propose Ponimator, a simple framework anchored on proximal interactive poses for versatile interaction animation. Our training data consists of close-contact two-person poses and their surrounding temporal context from motion-capture interaction datasets. Leveraging interactive pose priors, Ponimator employs two conditional diffusion models: (1) a pose animator that uses the temporal prior to generate dynamic motion sequences from interactive poses, and (2) a pose generator that applies the spatial prior to synthesize interactive poses from a single pose, text, or both when interactive poses are unavailable. Collectively, Ponimator supports diverse tasks, including image-based interaction animation, reaction animation, and text-to-interaction synthesis, facilitating the transfer of interaction knowledge from high-quality mocap data to open-world scenarios. Empirical experiments across diverse datasets and applications demonstrate the universality of the pose prior and the effectiveness and robustness of our framework.
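To make the two-stage design described above concrete, below is a minimal, purely illustrative PyTorch sketch of how a pose generator (single pose and/or text to two-person interactive pose) and a pose animator (interactive pose to short motion sequence) could be chained. The module names, dimensions, conditioning encodings, and the simplified sampling loop are assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: two people x 22 joints x 3D coordinates, flattened per frame.
NUM_JOINTS, POSE_DIM, SEQ_LEN = 22, 2 * 22 * 3, 60


class CondDenoiser(nn.Module):
    """Tiny stand-in for a conditional diffusion denoiser (illustrative, not the paper's network)."""

    def __init__(self, x_dim: int, cond_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + cond_dim + 1, 256), nn.SiLU(), nn.Linear(256, x_dim)
        )

    def forward(self, x_t, cond, t):
        # Predict a cleaner sample from the noisy input, the condition, and the timestep.
        t_emb = t.float().view(-1, 1) / 1000.0
        return self.net(torch.cat([x_t, cond, t_emb], dim=-1))


@torch.no_grad()
def sample(denoiser, cond, x_dim, steps=50):
    """Very simplified ancestral-style sampling loop; the real schedule/parameterization may differ."""
    x = torch.randn(cond.shape[0], x_dim)
    for t in reversed(range(steps)):
        t_batch = torch.full((cond.shape[0],), t)
        x = denoiser(x, cond, t_batch)          # crude: use the prediction as the next estimate
        if t > 0:
            x = x + 0.05 * torch.randn_like(x)  # re-inject a little noise between steps
    return x


# Stage 1 (pose generator): single pose + text embedding -> two-person interactive pose.
pose_generator = CondDenoiser(x_dim=POSE_DIM, cond_dim=POSE_DIM // 2 + 512)
# Stage 2 (pose animator): interactive pose -> short two-person motion sequence.
pose_animator = CondDenoiser(x_dim=POSE_DIM * SEQ_LEN, cond_dim=POSE_DIM)

single_pose = torch.randn(1, POSE_DIM // 2)   # e.g. estimated from an image
text_emb = torch.randn(1, 512)                # e.g. from a frozen text encoder

interactive_pose = sample(pose_generator, torch.cat([single_pose, text_emb], -1), POSE_DIM)
motion = sample(pose_animator, interactive_pose, POSE_DIM * SEQ_LEN)
motion = motion.view(1, SEQ_LEN, 2, NUM_JOINTS, 3)  # (batch, time, person, joint, xyz)
print(motion.shape)
```

In this sketch the animator is always anchored on an interactive pose, while the generator is only invoked when such a pose is unavailable (e.g. for text-to-interaction or single-pose inputs), mirroring the role split described in the abstract.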
Community
We propose Ponimator (ICCV 2025), a generative framework that turns interactive poses into realistic human–human motion, supporting image-, text-, and pose-based interaction animation.
The following similar papers were recommended by the Semantic Scholar API:
- Text2Interact: High-Fidelity and Diverse Text-to-Two-Person Interaction Generation (2025)
- MoReact: Generating Reactive Motion from Textual Descriptions (2025)
- InterPose: Learning to Generate Human-Object Interactions from Large-Scale Web Videos (2025)
- InfinityHuman: Towards Long-Term Audio-Driven Human (2025)
- MoSA: Motion-Coherent Human Video Generation via Structure-Appearance Decoupling (2025)
- VividAnimator: An End-to-End Audio and Pose-driven Half-Body Human Animation Framework (2025)
- PersonaAnimator: Personalized Motion Transfer from Unconstrained Videos (2025)