Paper2Video: Automatic Video Generation from Scientific Papers

📃 arXiv | 🌐 Project Page | 💻 GitHub

Dataset Description

The Paper2Video Benchmark includes 101 curated paper–video pairs spanning diverse research topics. Each paper averages about 13.3K words, 44.7 figures, and 28.7 pages, providing rich multimodal long-document inputs. Presentations contain 16 slides on average and run for about 6 minutes 15 seconds, with some reaching up to 14 minutes. Rather than focusing only on video generation, the benchmark is designed to evaluate long-horizon agentic tasks that require integrating text, figures, slides, and spoken presentations.

Dataset Structure

This repository contains two main components (a minimal loading sketch follows the list below):

  • Excel file with metadata and presentation links
    Each entry includes:

    • paper: the title of the paper
    • paper_link: the URL of the paper (e.g., PDF or LaTeX source)
    • presentation_link: the URL of the author-recorded presentation video (some entries also include original slides)
    • conference: the conference where the paper was published
    • year: the publication year of the paper
  • Author identity folders
    These folders contain per-author identity data, including a voice sample and an image, which can be used for tasks such as personalized talk synthesis or avatar generation.
    Each folder includes:

    • ref_img.png: the identity image of the author
    • ref_audio.wav: the identity voice sample of the author
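
To make the layout concrete, here is a minimal Python sketch of how the two components might be loaded. The file and directory names (metadata.xlsx, author_identity/) are assumptions for illustration, not the repository's documented names; adjust them to match the actual layout.

```python
from pathlib import Path

import pandas as pd

# Load the paper/presentation metadata (columns as documented above).
meta = pd.read_excel("metadata.xlsx")  # assumed file name
for _, row in meta.iterrows():
    print(row["paper"], row["conference"], row["year"])
    print("  paper link:        ", row["paper_link"])
    print("  presentation link: ", row["presentation_link"])

# Walk the author identity folders (assumed root directory name).
for folder in sorted(Path("author_identity").iterdir()):
    if not folder.is_dir():
        continue
    ref_img = folder / "ref_img.png"      # identity image of the author
    ref_audio = folder / "ref_audio.wav"  # identity voice sample of the author
    print(folder.name, ref_img.exists(), ref_audio.exists())
```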

Ethics

The author identity data (images and voice samples) provided in this repository are strictly for research purposes only. They must not be used for any commercial applications, deepfake creation, impersonation, or other misuse that could harm the rights, privacy, or reputation of the individuals. All usage should comply with ethical guidelines and respect the identity and intellectual property of the authors.

Citation

BibTeX:

@misc{paper2video,
      title={Paper2Video: Automatic Video Generation from Scientific Papers}, 
      author={Zeyu Zhu and Kevin Qinghong Lin and Mike Zheng Shou},
      year={2025},
      eprint={2510.05096},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.05096}, 
}