Dataset Viewer

Columns:
- `Unnamed: 0` (int64): 0 to 217
- `id` (int64): 1,526,373,200B to 1,546,707,910B
- `tweet_text` (string): 76 to 140 characters
- `paper_reference` (string): 20 to 113 characters
- `like_count` (int64): 8 to 2.72k

| Unnamed: 0 | id | tweet_text | paper_reference | like_count |
|---|---|---|---|---|
| 0 | 1,546,707,909,748,342,800 | High-resource Language-specific Training for Multilingual Neural Machine Translation abs: https://t.co/fYrwIPVpV2 https://t.co/b23EVZ6J5O | High-resource Language-specific Training for Multilingual Neural Machine Translation | 11 |
| 1 | 1,546,669,556,789,387,300 | Exploring Length Generalization in Large Language Models abs: https://t.co/7Gphb7Q8jJ https://t.co/cCpLTSrXfR | Exploring Length Generalization in Large Language Models | 17 |
| 2 | 1,546,667,351,885,729,800 | LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action abs:… https://t.co/lCk3P8KIwM | LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action | 32 |
| 3 | 1,546,665,636,734,140,400 | Scaling the Number of Tasks in Continual Learning abs: https://t.co/F4HxAxGUpI https://t.co/cyvXSBKthk | Scaling the Number of Tasks in Continual Learning | 47 |
| 4 | 1,546,707,909,748,342,800 | High-resource Language-specific Training for Multilingual Neural Machine Translation abs: https://t.co/fYrwIPVpV2 https://t.co/b23EVZ6J5O | High-resource Language-specific Training for Multilingual Neural Machine Translation | 11 |
| 5 | 1,546,669,556,789,387,300 | Exploring Length Generalization in Large Language Models abs: https://t.co/7Gphb7Q8jJ https://t.co/cCpLTSrXfR | Exploring Length Generalization in Large Language Models | 17 |
| 6 | 1,546,667,351,885,729,800 | LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action abs:… https://t.co/lCk3P8KIwM | LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action | 32 |
| 7 | 1,546,665,636,734,140,400 | Scaling the Number of Tasks in Continual Learning abs: https://t.co/F4HxAxGUpI https://t.co/cyvXSBKthk | Scaling the Number of Tasks in Continual Learning | 47 |
| 8 | 1,546,379,163,803,721,700 | CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships abs: https://t.co/ozIrQ7gx68 https://t.co/gSGfnsZbji | CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships | 53 |
| 9 | 1,546,376,106,122,567,700 | The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications a… https://t.co/TOPpVPQbM8 | The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications | 11 |
| 10 | 1,546,375,104,262,725,600 | Code Translation with Compiler Representations abs: https://t.co/nTT3dmXH4c method improves upon the state of the… https://t.co/wD4SozbilN | Code Translation with Compiler Representations | 127 |
| 11 | 1,546,363,822,121,820,200 | End-to-End Binaural Speech Synthesis abs: https://t.co/tR86cSAjQO project page: https://t.co/nB1iSV68U2 end-to-end… https://t.co/OTzfVZTFqb | End-to-End Binaural Speech Synthesis | 58 |
| 12 | 1,545,243,820,496,937,000 | Cross-Scale Vector Quantization for Scalable Neural Speech Coding abs: https://t.co/AbE9rP0ApQ https://t.co/pZXUTNipgs | Cross-Scale Vector Quantization for Scalable Neural Speech Coding | 25 |
| 13 | 1,545,240,373,328,593,000 | Finding Fallen Objects Via Asynchronous Audio-Visual Integration abs: https://t.co/mv9Rvl0hFA project page:… https://t.co/N8l4zaP9bH | Finding Fallen Objects Via Asynchronous Audio-Visual Integration | 33 |
| 14 | 1,545,228,848,391,938,000 | Back to the Source: Diffusion-Driven Test-Time Adaptation abs: https://t.co/5jmESOLQxG https://t.co/cI5UFyQI0B | Back to the Source: Diffusion-Driven Test-Time Adaptation | 82 |
| 15 | 1,544,897,525,664,170,000 | When does Bias Transfer in Transfer Learning? abs: https://t.co/tf8FWyf8Ge https://t.co/0l6vy8RHXI | When does Bias Transfer in Transfer Learning? | 135 |
| 16 | 1,544,865,587,343,630,300 | Transformers are Adaptable Task Planners abs: https://t.co/6lgFJD2Olt TTP can be pre-trained on multiple preferenc… https://t.co/XrolcxlV22 | Transformers are Adaptable Task Planners | 82 |
| 17 | 1,544,853,650,316,599,300 | Ultra-Low-Bitrate Speech Coding with Pretrained Transformers abs: https://t.co/rYRe5N7Bqu https://t.co/zOsCY53r2s | Ultra-Low-Bitrate Speech Coding with Pretrained Transformers | 34 |
| 18 | 1,544,721,641,049,145,300 | CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations abs:… https://t.co/6ng3UArKdE | CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations | 52 |
| 19 | 1,544,521,037,274,046,500 | An Empirical Study of Implicit Regularization in Deep Offline RL abs: https://t.co/rCjHkQ2jwL https://t.co/8hJOsVA6D0 | An Empirical Study of Implicit Regularization in Deep Offline RL | 45 |
| 20 | 1,544,519,268,234,154,000 | Offline RL Policies Should be Trained to be Adaptive abs: https://t.co/kC7TPSOTt2 https://t.co/Ox2D028P33 | Offline RL Policies Should be Trained to be Adaptive | 34 |
| 21 | 1,544,491,557,293,854,700 | Efficient Representation Learning via Adaptive Context Pooling abs: https://t.co/zZzezhvbN7 https://t.co/xJoStGBSqp | Efficient Representation Learning via Adaptive Context Pooling | 163 |
| 22 | 1,544,488,616,734,429,200 | CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning abs:… https://t.co/HqXmDpaUEh | CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning | 102 |
| 23 | 1,544,485,593,991,811,000 | How Much More Data Do I Need? Estimating Requirements for Downstream Tasks abs: https://t.co/RNXT4IRIaL https://t.co/uJGrEfgaAv | How Much More Data Do I Need? Estimating Requirements for Downstream Tasks | 230 |
| 24 | 1,544,483,235,542,990,800 | Neural Networks and the Chomsky Hierarchy abs: https://t.co/u6Jl2WvKMr sota architectures, such as LSTMs and Trans… https://t.co/DyHnH8Q8z7 | Neural Networks and the Chomsky Hierarchy | 209 |
| 25 | 1,544,207,617,102,332,000 | GlowVC: Mel-spectrogram space disentangling model for language-independent text-free voice conversion abs:… https://t.co/kFYdKhrhSA | GlowVC: Mel-spectrogram space disentangling model for language-independent text-free voice conversion | 19 |
| 26 | 1,544,201,186,739,458,000 | Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation abs:… https://t.co/yL9kWlUYfs | Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation | 112 |
| 27 | 1,544,193,877,053,161,500 | WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents abs: https://t.co/8hZyMt90Rv pro… https://t.co/eHzGN2GHqj | WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents | 52 |
| 28 | 1,544,127,293,660,037,000 | UserLibri: A Dataset for ASR Personalization Using Only Text abs: https://t.co/0bug7OWU42 https://t.co/OMqJSGlqDx | UserLibri: A Dataset for ASR Personalization Using Only Text | 9 |
| 29 | 1,543,981,460,964,708,400 | LaserMix for Semi-Supervised LiDAR Semantic Segmentation abs: https://t.co/SvqHy1y7LI project page:… https://t.co/jbQtQiDbDy | LaserMix for Semi-Supervised LiDAR Semantic Segmentation | 74 |
| 30 | 1,543,766,808,309,670,000 | Rethinking Optimization with Differentiable Simulation from a Global Perspective abs: https://t.co/trEcw4VZb2 proje… https://t.co/1UsI0q03IL | Rethinking Optimization with Differentiable Simulation from a Global Perspective | 94 |
| 31 | 1,543,763,117,515,182,000 | Visual Pre-training for Navigation: What Can We Learn from Noise? abs: https://t.co/Rn5UGvvMMz github:… https://t.co/eKeMSlBxVx | Visual Pre-training for Navigation: What Can We Learn from Noise? | 134 |
| 32 | 1,543,759,817,449,390,000 | DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale abs:… https://t.co/IbF6IdUDj7 | DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale | 120 |
| 33 | 1,543,757,524,356,272,000 | When Does Differentially Private Learning Not Suffer in High Dimensions? abs: https://t.co/yws7BhoBaP https://t.co/bD2Gz6B3GU | When Does Differentially Private Learning Not Suffer in High Dimensions? | 28 |
| 34 | 1,542,740,430,084,792,300 | Implicit Neural Spatial Filtering for Multichannel Source Separation in the Waveform Domain abs:… https://t.co/3cNoOlr5SD | Implicit Neural Spatial Filtering for Multichannel Source Separation in the Waveform Domain | 31 |
| 35 | 1,542,713,456,268,304,400 | Denoised MDPs: Learning World Models Better Than the World Itself abs: https://t.co/CPwlF0soWZ project page:… https://t.co/5BBwGXYZ2l | Denoised MDPs: Learning World Models Better Than the World Itself | 98 |
| 36 | 1,542,712,192,746,782,700 | Forecasting Future World Events with Neural Networks abs: https://t.co/tD8F0ZC1rC github: https://t.co/v8HZgye0ZH… https://t.co/eJaakYSUSw | Forecasting Future World Events with Neural Networks | 77 |
| 37 | 1,542,709,853,516,431,400 | Learning Iterative Reasoning through Energy Minimization abs: https://t.co/WDLx1hKPqG project page:… https://t.co/oDEClr0ho1 | Learning Iterative Reasoning through Energy Minimization | 125 |
| 38 | 1,542,709,029,964,849,200 | Improving the Generalization of Supervised Models abs: https://t.co/3CzEuuxvHt project page: https://t.co/uSjiKvSMN8 https://t.co/ffUkpTL7Ng | Improving the Generalization of Supervised Models | 189 |
| 39 | 1,542,325,850,036,752,400 | RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness abs:… https://t.co/iFAou98U0X | RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness | 172 |
| 40 | 1,542,316,111,743,664,000 | Masked World Models for Visual Control abs: https://t.co/eZx53zuqnm project page: https://t.co/hgZwrV3zO5 Can MAE… https://t.co/UfybFx81uj | Masked World Models for Visual Control | 83 |
| 41 | 1,542,313,347,835,732,000 | Beyond neural scaling laws: beating power law scaling via data pruning abs: https://t.co/OFYkTt5b2d https://t.co/7SKXMClaR8 | Beyond neural scaling laws: beating power law scaling via data pruning | 164 |
| 42 | 1,542,312,585,768,435,700 | 3D-Aware Video Generation abs: https://t.co/N64ARXFKMJ project page: https://t.co/5MoGVKqItn https://t.co/uZdLIXWc1P | 3D-Aware Video Generation | 122 |
| 43 | 1,541,957,148,070,011,000 | DayDreamer: World Models for Physical Robot Learning abs: https://t.co/quyTQGcjEA project page:… https://t.co/DD67NUzgJy | DayDreamer: World Models for Physical Robot Learning | 182 |
| 44 | 1,541,948,699,559,006,200 | Long Range Language Modeling via Gated State Spaces abs: https://t.co/HEd2lwlGan https://t.co/tPOHv7dP0T | Long Range Language Modeling via Gated State Spaces | 124 |
| 45 | 1,541,945,827,035,332,600 | ProGen2: Exploring the Boundaries of Protein Language Models abs: https://t.co/kelWMlhH8r github:… https://t.co/nzvei5pMJR | ProGen2: Exploring the Boundaries of Protein Language Models | 64 |
| 46 | 1,541,626,617,490,837,500 | Multitask vocal burst modeling with ResNets and pre-trained paralinguistic Conformers abs: https://t.co/QZLcoFOeSz https://t.co/315WfiVVRr | Multitask vocal burst modeling with ResNets and pre-trained paralinguistic Conformers | 11 |
| 47 | 1,541,599,748,624,351,200 | Programmatic Concept Learning for Human Motion Description and Synthesis abs: https://t.co/uIoxGozwhD project page:… https://t.co/MmCMQouLF7 | Programmatic Concept Learning for Human Motion Description and Synthesis | 83 |
| 48 | 1,541,592,312,094,101,500 | Prompting Decision Transformer for Few-Shot Policy Generalization abs: https://t.co/bD2f4SjRP6 project page:… https://t.co/ZfAxxx6zCu | Prompting Decision Transformer for Few-Shot Policy Generalization | 48 |
| 49 | 1,541,590,513,241,006,000 | Repository-Level Prompt Generation for Large Language Models of Code abs: https://t.co/GG1YHoCQdf github:… https://t.co/Z9fUO4r8sU | Repository-Level Prompt Generation for Large Language Models of Code | 56 |
| 50 | 1,541,588,372,631,818,200 | Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One abs:… https://t.co/uJuKxO7XJC | Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One | 121 |
| 51 | 1,541,226,747,533,922,300 | PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction abs: https://t.co/yXdFTqRWF3 dataset… https://t.co/ZDNMPI2NVR | PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction | 94 |
| 52 | 1,541,219,433,259,176,000 | Megapixel Image Generation with Step-Unrolled Denoising Autoencoders abs: https://t.co/6fX9PseXBT obtain FID score… https://t.co/HPodJ8xzPx | Megapixel Image Generation with Step-Unrolled Denoising Autoencoders | 147 |
| 53 | 1,540,184,734,390,706,200 | Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision abs: https://t.co/NO2vzfdYdS https://t.co/WoN73BzgeQ | Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision | 66 |
| 54 | 1,540,176,838,017,917,000 | Offline RL for Natural Language Generation with Implicit Language Q Learning abs: https://t.co/wYTtUgdryZ project p… https://t.co/xS8JCODxwP | Offline RL for Natural Language Generation with Implicit Language Q Learning | 43 |
| 55 | 1,540,161,095,930,880,000 | MaskViT: Masked Visual Pre-Training for Video Prediction abs: https://t.co/uhMEB6ashb project page:… https://t.co/gbnxrCxUrc | MaskViT: Masked Visual Pre-Training for Video Prediction | 147 |
| 56 | 1,540,156,319,923,060,700 | The ArtBench Dataset: Benchmarking Generative Models with Artworks abs: https://t.co/Zzq0A2i5ob github:… https://t.co/SfQlvTLrk3 | The ArtBench Dataset: Benchmarking Generative Models with Artworks | 200 |
| 57 | 1,539,811,680,359,796,700 | TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning abs:… https://t.co/UArbr7zhRE | TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning | 85 |
| 58 | 1,539,794,210,190,155,800 | Jointist: Joint Learning for Multi-instrument Transcription and Its Applications abs: https://t.co/xeuPUBcr01 proje… https://t.co/QmyCioKviJ | Jointist: Joint Learning for Multi-instrument Transcription and Its Applications | 18 |
| 59 | 1,539,780,412,297,330,700 | GEMv2: Multilingual NLG Benchmarking in a Single Line of Code abs: https://t.co/pKS5mgoDkG GEMv2 supports 40 docum… https://t.co/qMitHzTlO0 | GEMv2: Multilingual NLG Benchmarking in a Single Line of Code | 18 |
| 60 | 1,539,777,865,688,010,800 | reStructured Pre-training abs: https://t.co/mYm7qbt59N https://t.co/O5T3tSY4PL | reStructured Pre-training | 32 |
| 61 | 1,539,672,920,456,298,500 | Scaling Autoregressive Models for Content-Rich Text-to-Image Generation paper: https://t.co/NKkTeHttLd project page… https://t.co/CcKxsWPmjR | Scaling Autoregressive Models for Content-Rich Text-to-Image Generation | 137 |
| 62 | 1,539,480,179,151,712,300 | Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding abs: https://t.co/Bq3GUQywPV https://t.co/iLTaoXm0yC | Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding | 66 |
| 63 | 1,539,460,213,211,910,100 | EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine abs: https://t.co/F4XkHLRxPi github:… https://t.co/JiwSuMdkZH | EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine | 34 |
| 64 | 1,539,459,120,667,021,300 | EpiGRAF: Rethinking training of 3D GANs abs: https://t.co/RcY2vQr0NH project page: https://t.co/kuXPKA00bZ https://t.co/CVCsseAS21 | EpiGRAF: Rethinking training of 3D GANs | 145 |
| 65 | 1,539,453,554,578,055,200 | Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors abs:… https://t.co/noluSxtqzu | Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors | 72 |
| 66 | 1,539,435,374,103,220,200 | Global Context Vision Transformers abs: https://t.co/d6go0yv7fu github: https://t.co/rUYFs09ReC On ImageNet-1K dat… https://t.co/HJnw5wclQV | Global Context Vision Transformers | 89 |
| 67 | 1,539,421,251,076,247,600 | (Certified!!) Adversarial Robustness for Free! abs: https://t.co/NTU6lioyII show how to achieve sota certified adv… https://t.co/2VW1CDARya | (Certified!!) Adversarial Robustness for Free! | 42 |
| 68 | 1,539,076,449,788,997,600 | A Closer Look at Smoothness in Domain Adversarial Training abs: https://t.co/GgKE9695vj github:… https://t.co/33MX6TZhjt | A Closer Look at Smoothness in Domain Adversarial Training | 97 |
| 69 | 1,538,710,356,444,471,300 | Fast Finite Width Neural Tangent Kernel abs: https://t.co/iY1lFoYMjA https://t.co/hWzzcCd5OZ | Fast Finite Width Neural Tangent Kernel | 23 |
| 70 | 1,538,706,936,211,951,600 | What do navigation agents learn about their environment? abs: https://t.co/eXelV0REgZ github:… https://t.co/TGSzEQ1v1c | What do navigation agents learn about their environment? | 37 |
| 71 | 1,538,698,653,493,338,000 | Bootstrapped Transformer for Offline Reinforcement Learning abs: https://t.co/YiEY3uiTgL https://t.co/yle4hPgMmf | Bootstrapped Transformer for Offline Reinforcement Learning | 137 |
| 72 | 1,538,695,457,550,921,700 | Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning abs:… https://t.co/uLQLmf4l3M | Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning | 42 |
| 73 | 1,538,692,524,830,769,200 | MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge abs: https://t.co/etfGL1xnum project pa… https://t.co/Fv1aLuEJSV | MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge | 265 |
| 74 | 1,538,687,423,722,541,000 | Lossy Compression with Gaussian Diffusion abs: https://t.co/tw5YiZAN3B implement a proof of concept and find that… https://t.co/4nvLjhIX4e | Lossy Compression with Gaussian Diffusion | 102 |
| 75 | 1,538,686,489,491,648,500 | NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates abs: https://t.co/4S8sBXq6Ko a diffu… https://t.co/xd3eQ0ApQJ | NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates | 87 |
| 76 | 1,538,006,265,363,738,600 | iBoot: Image-bootstrapped Self-Supervised Video Representation Learning abs: https://t.co/dkZUd4QC81 https://t.co/pJFpxd7ckU | iBoot: Image-bootstrapped Self-Supervised Video Representation Learning | 73 |
| 77 | 1,538,000,649,933,115,400 | Neural Scene Representation for Locomotion on Structured Terrain abs: https://t.co/68xY622f4w https://t.co/W3wTYp31f6 | Neural Scene Representation for Locomotion on Structured Terrain | 83 |
| 78 | 1,537,924,151,389,737,000 | Programmatic Concept Learning for Human Motion Description and Synthesis paper: https://t.co/Qemk23gUHX project pag… https://t.co/ImHeYQC5vj | Programmatic Concept Learning for Human Motion Description and Synthesis | 60 |
| 79 | 1,537,640,654,968,324,000 | Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing abs: https://t.co/9tpvhXuaRw project page:… https://t.co/XxpZg5PGke | Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing | 73 |
| 80 | 1,537,637,590,274,277,400 | MoDi: Unconditional Motion Synthesis from Diverse Data abs: https://t.co/YBV9jSUemo https://t.co/o1uvG18RSk | MoDi: Unconditional Motion Synthesis from Diverse Data | 70 |
| 81 | 1,537,630,146,244,518,000 | OmniMAE: Single Model Masked Pretraining on Images and Videos abs: https://t.co/j9a3imUEJ6 single pretrained model… https://t.co/OiR2pY5emm | OmniMAE: Single Model Masked Pretraining on Images and Videos | 146 |
| 82 | 1,537,622,879,386,456,000 | SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos abs: https://t.co/0MkpFJiUzM using spars… https://t.co/x1Hvgf13qE | SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos | 54 |
| 83 | 1,537,621,348,339,572,700 | BYOL-Explore: Exploration by Bootstrapped Prediction abs: https://t.co/xXQtolzjlP BYOL-Explore achieves superhuman… https://t.co/uZvAbVd1Bb | BYOL-Explore: Exploration by Bootstrapped Prediction | 79 |
| 84 | 1,537,618,457,365,303,300 | Know your audience: specializing grounded language models with the game of Dixit abs: https://t.co/T8d5ir8LDQ https://t.co/zSk5oR2F9D | Know your audience: specializing grounded language models with the game of Dixit | 39 |
| 85 | 1,537,323,042,380,124,200 | VCT: A Video Compression Transformer abs: https://t.co/llH1L1ooKa presented an elegantly simple transformer-based… https://t.co/ErovCWVDg3 | VCT: A Video Compression Transformer | 68 |
| 86 | 1,537,314,480,056,672,300 | Contrastive Learning as Goal-Conditioned Reinforcement Learning abs: https://t.co/6dv7PNn0qq project page:… https://t.co/vRSdekL9If | Contrastive Learning as Goal-Conditioned Reinforcement Learning | 77 |
| 87 | 1,537,288,570,880,368,600 | Masked Siamese ConvNets abs: https://t.co/YMG1O1ZZ5N https://t.co/LCVqVvFNfR | Masked Siamese ConvNets | 83 |
| 88 | 1,537,265,816,609,116,200 | Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone abs: https://t.co/UgdYW9Cf1g project page:… https://t.co/v2sTfFBq5r | Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone | 89 |
| 89 | 1,537,257,011,657,814,000 | Variable Bitrate Neural Fields abs: https://t.co/Rp1t2LaQaW project page: https://t.co/e2t8OrznxI https://t.co/6hw7OwbjZN | Variable Bitrate Neural Fields | 162 |
| 90 | 1,537,254,679,188,488,200 | A Unified Sequence Interface for Vision Tasks abs: https://t.co/hXbVXdqHh1 explore a unified sequence interface fo… https://t.co/QG5UxvIgS4 | A Unified Sequence Interface for Vision Tasks | 50 |
| 91 | 1,537,252,952,666,087,400 | Prefix Language Models are Unified Modal Learners abs: https://t.co/BD4b3rQnKg https://t.co/2ofScnMIKN | Prefix Language Models are Unified Modal Learners | 66 |
| 92 | 1,537,248,480,074,293,200 | Diffusion Models for Video Prediction and Infilling abs: https://t.co/MwfxwKXG4z project page:… https://t.co/rnwB8eGFAs | Diffusion Models for Video Prediction and Infilling | 103 |
| 93 | 1,536,879,515,883,946,000 | ReCo: Retrieve and Co-segment for Zero-shot Transfer abs: https://t.co/YwxkCGGyG1 project page:… https://t.co/WzVhmfhWCz | ReCo: Retrieve and Co-segment for Zero-shot Transfer | 58 |
| 94 | 1,536,872,875,885,580,300 | Object Scene Representation Transformer abs: https://t.co/SUfNIBGAxt project page: https://t.co/j8ebSAeM8v scales… https://t.co/wa4vo3RJAK | Object Scene Representation Transformer | 97 |
| 95 | 1,536,871,347,372,052,500 | Adversarial Audio Synthesis with Complex-valued Polynomial Networks abs: https://t.co/ekeC0nKIhR APOLLO results in… https://t.co/sDcl2nydkt | Adversarial Audio Synthesis with Complex-valued Polynomial Networks | 23 |
| 96 | 1,536,526,888,289,575,000 | Large-Scale Retrieval for Reinforcement Learning abs: https://t.co/fjzGvI3ZXB https://t.co/eFRHt8yXoq | Large-Scale Retrieval for Reinforcement Learning | 86 |
| 97 | 1,536,522,198,785,183,700 | GLIPv2: Unifying Localization and Vision-Language Understanding abs: https://t.co/3GomrHG8xq github:… https://t.co/bD68NZk4Lp | GLIPv2: Unifying Localization and Vision-Language Understanding | 73 |
| 98 | 1,536,521,362,898,145,300 | Self-critiquing models for assisting human evaluators abs: https://t.co/8Zy2xfA5Qz https://t.co/qndZMS9zXa | Self-critiquing models for assisting human evaluators | 19 |
| 99 | 1,536,515,535,202,136,000 | Multi-instrument Music Synthesis with Spectrogram Diffusion abs: https://t.co/UNDV4e7A6R use a simple two-stage pr… https://t.co/AebIraqLF2 | Multi-instrument Music Synthesis with Spectrogram Diffusion | 87 |
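Note that the viewer renders `id` values with thousands separators and abbreviates large `like_count` values (e.g. `2.72k`), and the repeated trailing digits in the ids suggest the preview may round the 64-bit values, so exact ids should be taken from the raw data rather than the preview. A minimal sketch of normalizing these rendered strings back to integers; the helper names are illustrative and not part of any dataset API:

```python
# Hypothetical helpers for undoing the dataset viewer's number formatting.

def parse_viewer_int(s: str) -> int:
    """Parse an integer rendered with thousands separators,
    e.g. '1,546,707,909,748,342,800'."""
    return int(s.replace(",", ""))

def parse_like_count(s: str) -> int:
    """Parse a like count that may be abbreviated,
    e.g. '2.72k' -> 2720, '8' -> 8."""
    s = s.strip().lower()
    if s.endswith("k"):
        return round(float(s[:-1]) * 1000)
    return int(s)
```

For example, `parse_like_count("2.72k")` yields `2720`, matching the column's displayed maximum.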
This dataset contains Twitter information from the AK92501 account: tweets announcing machine-learning papers, together with the referenced paper title and each tweet's like count.
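One cleaning step worth doing before use: the preview shows rows 0–3 repeated verbatim as rows 4–7, so the data likely contains duplicate tweets. A minimal sketch (assuming pandas, with a tiny inline stand-in for the real file) of deduplicating on the tweet `id`:

```python
import pandas as pd

# Illustrative stand-in for the dataset; substitute the actual file when loading.
rows = [
    {"id": 1546707909748342800, "tweet_text": "High-resource Language-specific Training ...", "like_count": 11},
    {"id": 1546669556789387300, "tweet_text": "Exploring Length Generalization ...", "like_count": 17},
    {"id": 1546707909748342800, "tweet_text": "High-resource Language-specific Training ...", "like_count": 11},
]
df = pd.DataFrame(rows)

# Keep the first occurrence of each tweet id.
deduped = df.drop_duplicates(subset="id").reset_index(drop=True)
print(len(df), len(deduped))  # 3 2
```

The same call applies unchanged to the full table once it is loaded into a DataFrame.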