Dataset Viewer (auto-converted to Parquet)
Columns: Paper_ID (string, 10 chars), Question (string, 201–1.81k chars), ocr_output (string, 252–54k chars)
rEQ8OiBxbZ
Could you elaborate on how the local structures are reconstructed? What serves as the input for this process: a single embedding from the TokenGT-3D output, or a collection of embeddings from local structure segmentations within a single molecule?
3D Molecular Pretraining via Localized Geometric Generation Anonymous authors Paper under double-blind review

Abstract Self-supervised learning on 3D molecular structures has gained prominence in AI-driven drug discovery due to the high cost of annotating biochemical data. However, few have studied the selection of proper modeling semantic units within 3D molecular data, which is critical for an expressive pre-trained model, as verified in natural language processing and computer vision. In this study, we introduce Localized Geometric Generation (LEGO), a novel approach that treats tetrahedrons within 3D molecular structures as fundamental modeling blocks, leveraging their simplicity in three dimensions and their prevalence in molecular structural patterns such as carbon skeletons and functional groups. Inspired by masked language/image modeling, LEGO perturbs a portion of tetrahedrons and learns to reconstruct them during pretraining. The reconstruction of the noised local structures can be divided into a two-step process, namely spatial orientation prediction and internal arrangement generation. First, we predict the global orientation of the noised local structure within the whole molecule, equipping the model with positional information for these foundational components. Then, we geometrically reconstruct the internal arrangements of the noised local structures, revealing their functional semantics. To address the atom-bond inconsistency problem in previous denoising methods and utilize the prior knowledge of chemical bonds, we propose to model the graph as a set of nodes and edges and explicitly generate the edges during pre-training. In this way, LEGO exploits the advantages of encoding structural geometry features as well as leveraging the expressiveness of self-supervised learning. Extensive experiments on molecular quantum and biochemical property prediction tasks demonstrate the effectiveness of our approach.

1 Introduction Understanding 3D molecular structures is crucial for various tasks in drug discovery, such as molecular property prediction (Wu et al., 2018; Hu et al., 2021; Chmiela et al., 2023), binding affinity prediction (Öztürk et al., 2018; Ru et al., 2022), and docking-based generation (Ma et al., 2021; Yang et al., 2021). In recent years, self-supervised learning on 3D molecular structures has been extensively explored to learn from large collections of unlabeled compounds, which helps overcome the costly and time-consuming process of annotating biochemical properties. As demonstrated in natural language processing and computer vision, a careful selection of minimal semantic building blocks is critical for developing an expressive and robust pretrained model. By providing well-structured units, the model can effectively identify underlying patterns and extract meaningful semantics from data compositions during pretraining. However, few existing 3D molecular pretraining methods have studied this aspect. Existing 3D molecular pretraining methods fall into two categories: representation-level and structure-level. Representation-level methods aim to enhance 2D molecular representations by leveraging information from 3D molecular structures through contrastive learning (Liu et al., 2021a; Stärk et al., 2022). Such methods use 3D molecular structures only at the encoding stage and fail to model inherent structural features through self-supervised training.
Structure-level methods address this limitation by developing pre-training tasks of coordinate denoising, where independent noise is added to the coordinates of all atoms in the graph and the model is trained to reconstruct the original atomic positions (Zaidi et al., 2022; Liu et al., 2022b; Zhou et al., 2023; Jiao et al., 2023; Feng et al., 2023). However, from a chemical perspective, an atom alone can hardly serve as a functional unit in molecules. (Figure 1: Local structures consisting of a central atom and its one-hop neighbors form a highly prevalent motif in molecules, which underlies (a) carbon backbones and (b) functional groups, among others.) Therefore, atom-wise denoising provides limited improvement in the model's understanding of functional substructures. In this paper, we focus on this open issue and propose a novel pretraining approach as an initial exploration. Our method, called Localized Geometric Generation (LEGO), treats tetrahedrons within 3D molecular structures as fundamental building blocks and tailors two pretraining tasks to learn the semantics. There are two key conceptual motivations behind this design: Geometrically, the tetrahedron is the simplest polyhedron that can be constructed in 3D Euclidean space, serving as the base case for more complex polyhedra. This structural simplicity and primitiveness align with the ubiquity of the tetrahedral motif in chemistry: a central atom along with its one-hop neighbors forms a highly prevalent local structure in molecules, which underlies carbon backbones, functional groups, and more (Fig. 1). Therefore, tetrahedrons can be considered excellent basic semantic units for 3D molecular modeling from both a geometric and a chemical standpoint. Inspired by masked language/image modeling techniques (Devlin et al., 2019; Dosovitskiy et al., 2020), LEGO introduces perturbations to a portion of tetrahedrons in a 3D molecular structure and learns to reconstruct them during pretraining. In particular, we begin by segmenting a 3D molecular structure into a non-overlapping stack of one-hop local tetrahedral structures. Subsequently, we add noise or apply masks to part of the segmented local structures. The reconstruction of the perturbed local structures involves two steps: global orientation prediction and local structure generation. During the orientation prediction step, we predict the spherical coordinates of the center of mass (CoM) for each masked tetrahedron. This prediction provides positional information about local structures and their relationships within the whole molecule. For the local generation, we introduce a geometric generation task to accurately reconstruct atom arrangements within each masked tetrahedron, which focuses on learning the patterns and semantics of the unit itself. By incorporating these steps, LEGO is able to learn both global and local features of 3D molecular geometry in a self-supervised manner. Although the design mentioned above allows for the explicit modeling of geometric features in 3D molecular data, it is important to note that most existing 3D molecular graph models are based on nodes, where edges are represented as additional node features and are not explicitly modeled. Such backbones can lead to an atom-bond inconsistency problem during the denoising generation process (Peng et al., 2023). To be specific, when generating 3D structures, atom-based networks first produce atom positions and add the chemical bonds in a post-processing manner.
This sequential approach may result in intermediate atom positions that are not feasible for forming bonds, leading to unrealistic topologies such as extra-large rings or violations of atom valency constraints. This atom-bond inconsistency presents a challenge for our pretraining approach, which focuses on reconstructing local molecular structures. In fact, bonds are critical abstract concepts in molecules: they quantify distance-dependent interaction forces between atoms and encode key chemical semantics, and therefore play a critical role in modeling molecular local structures. To address the inconsistency, we propose modeling the molecular graph as a set of nodes and edges. During pretraining, LEGO generates the edges explicitly, allowing it to learn the significant chemical and geometric priors embedded in the bonding patterns. The contributions of this work can be summarized as follows: • We propose a novel self-supervised learning method for 3D molecular structures. Our approach treats tetrahedrons as the fundamental building blocks within 3D structures and introduces two pretraining tasks that enable the learning of local and global semantics in a geometric manner. • We address the atom-bond inconsistency problem encountered in previous denoising methods by modeling the molecular graph as a set of nodes and edges. This representation leverages the prior knowledge of chemical bonds, facilitating the accurate representation of molecular structures. • We demonstrate the effectiveness of our method through comprehensive experiments. We pretrain LEGO on a large-scale dataset and evaluate the pretrained model on biochemical and quantum property prediction tasks. The results show that our approach can effectively capture molecular functional semantics and achieves results comparable to Transformer variants with sophisticated graph-specific inductive biases.

2 RELATED WORKS 3D Molecular Structure Modeling. 3D modeling of molecular structures has been extensively explored in recent years, enabled by advancements in graph neural networks (GNNs) (Wu et al., 2020; Han et al., 2022). Early work by SchNet (Schütt et al., 2017) incorporates atomic distances into continuous-filter convolutional layers to capture local atomic correlations. DimeNet (Klicpera et al., 2020) pioneers the incorporation of bond angles and directionality into vanilla GNNs, demonstrating improved performance. SphereNet (Liu et al., 2021b) and ComENet (Wang et al., 2022) introduce spherical messages to build more informative representations. To encode 3D equivariance as an inductive bias grounded in group theory, Tensor Field Networks (Thomas et al., 2018), SE(3)-Transformers (Fuchs et al., 2020) and NequIP (Batzner et al., 2022) employ tensor products, while PaiNN (Schütt et al., 2021) and EGNN (Satorras et al., 2021) adopt equivariant message passing. Beyond message passing neural networks (MPNNs), the powerful Transformer architecture (Vaswani et al., 2017) has also been explored for graph-structured data. Dwivedi & Bresson (2020) first introduce a fully-connected transformer for graphs and use Laplacian eigenvectors as node positional encodings. GRPE (Park et al., 2022) and Graphormer (Ying et al., 2021) define structural positional encodings based on node topology, node-edge interactions, and 3D distances. Besides positional encodings, GraphTrans (Wu et al., 2021), EGT (Hussain et al., 2022) and GraphGPS (Rampášek et al., 2022) propose hybrid architectures with stacked MPNN layers before the global attention layer.
Notably, TokenGT (Kim et al., 2022) demonstrated that standard Transformers without graph-specific modifications can also achieve promising results in graph learning. Despite the success of directly incorporating 3D features into the model input, there remains a need to develop pretraining paradigms for 3D molecular structures that can learn semantic features in a self-supervised manner. Pretraining on 3D Molecular Structures. Existing pre-training methods for 3D molecular structures can be categorized into two types: representation-level and structure-level. Representation-level methods use separate encoders to embed 2D graphs and 3D structures to obtain embeddings from two views, then perform contrastive learning (Stärk et al., 2022) or generative self-supervised learning (Liu et al., 2021a) on the two embeddings. Such methods focus on the 2D graph representation and treat 3D information as a complement to its 2D counterpart, ignoring spatial features that are more informative in determining molecular properties. Structure-level denoising tasks fill this gap by involving geometric elements in pretraining tasks. Liu et al. (2022b), Zaidi et al. (2022), Zhou et al. (2023), and Feng et al. (2023) employ denoising tasks on atomic coordinates and explore how the scale and distribution of the added noise impact the results. Zhu et al. (2022) propose a masked modeling task that predicts the coordinates of masked atoms using the corresponding 2D features. GEM (Fang et al., 2022) and 3D-PGT (Wang et al., 2023) use geometric features as pretraining objectives, but they implement random masking. Different from these studies, we underscore the modeling of local semantic units in 3D molecular pretraining.

3 METHOD 3.1 MOTIVATION Our objective is to develop a segmentation approach that effectively decomposes 3D molecular structures into suitable units for representation learning. These units need to strike a balance between two crucial factors. On one hand, the units should encapsulate the critical details related to the local molecular environment in a way that downstream models can further analyze for property predictions. On the other hand, overly complex or molecule-specific representations could limit the applicability of the approach across different chemical spaces. (Figure 2: Overview of LEGO. I. Based on non-terminal atoms, we segment 3D molecular structures into building blocks of one-hop local structures (LS). We perturb a portion of the LS by adding noise to atomic positions and masking the edge features. II. We pre-train LEGO by geometrically reconstructing the perturbed local structures in two stages.) Therefore, we aim to identify structurally meaningful yet simple decompositions that contain rich semantics, similar to how tokens and patches serve as universal elements for natural language processing and computer vision models. Our proposed solution is to take tetrahedrons (one-hop local structures in general cases) as the fundamental building blocks. Geometrically, the tetrahedron is the simplest polyhedron that can be constructed in 3D space, serving as the base case for more complex polyhedra. This structural simplicity aligns with the widespread occurrence of the tetrahedral motif in chemical compounds, as depicted in Figure 1. In carbon skeletons and many functional groups, tetrahedral centers with a maximum valency of four allow diverse atoms to form intricate molecular structures while minimizing spatial constraints.
It is worth pointing out that the local structure of actual molecules may not always conform to a standard tetrahedral shape, and our segmentation strategy is adjusted to accommodate this variability. For center atoms with fewer than four neighbors, such as the C, N, O in Fig. 1(b), we simply treat the ketone, amino, or ether group as a degenerate tetrahedron. For instances where center atoms form more than four bonds, such as sulfur and phosphorus, we incorporate all one-hop atoms as part of the local structure. Additionally, cyclic structures like benzene are handled by selecting non-adjacent carbons to represent the ring through a combination of its triangular fragments. By retaining this adaptive nature for atypical cases while concentrating on tetrahedra, the algorithm aims to balance simplicity and practical applicability across diverse chemical spaces.

### 3.2 TokenGT and Its 3D Extension

Most existing graph neural networks typically adopt an atom-centric approach, where edge features are encoded as additional attributes and then aggregated to atoms through message passing. However, in the field of chemistry, chemical bonds play a crucial role as they abstract distance-based interatomic forces and provide essential chemical priors in local structure modeling. Neglecting edges in molecular generation can lead to the problem of atom-bond inconsistency, resulting in undesirable molecular structures, as demonstrated by Peng et al. (2023) and Qiang et al. (2023). In order to mitigate potential negative effects of atom-based modeling on our generative pre-training approach, in this section we provide a brief overview of the architecture of TokenGT and discuss a minor improvement that we propose to adapt it to 3D data. TokenGT TokenGT, short for Tokenized Graph Transformer, has been both theoretically and empirically shown to yield promising results in graph learning. It has been demonstrated that by incorporating augmented embeddings, standard Transformers can effectively handle graph data without requiring extensive graph-specific modifications (Kim et al., 2022). Given an input graph \( G = (V, E) \), TokenGT first initializes the node set \( V = \{v_1, ..., v_n\} \) and the edge set \( E = \{e_1, ..., e_m\} \) as \( X^V \in \mathbb{R}^{n \times d}, X^E \in \mathbb{R}^{m \times d} \). Then, each token in \( X \) is augmented with predefined orthonormal token identifiers to represent graph connectivity, and with trainable type identifiers to encode whether a token is a node or an edge. Token Identifier Given an input graph \( G = (V, E) \), \( n \) node-wise orthonormal vectors \( P \in \mathbb{R}^{n \times d_p} \) are produced and concatenated after the token embeddings, i.e., for node \( v \in V \), the token \( X_v \) is augmented as \([X_v, P_v, P_v]\); for edge \((u, v) \in E \), the token \( X_{(u,v)} \) is augmented as \([X_{(u,v)}, P_u, P_v]\). With orthogonality, a Transformer can tell whether an edge \( e = (u, v) \) is connected with a node \( k \) through the dot product (attention), since \([P_u, P_v][P_k, P_k]^\top = 1\) if and only if \( k \in \{u, v\} \) and 0 otherwise. Through this design, TokenGT is able to incorporate the connectivity between nodes and edges. For more theoretical analysis of the completeness and informativeness of these token identifiers, please refer to the original paper.
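To make the dot-product incidence test concrete, below is a minimal NumPy sketch (our illustration, not the authors' code; all helper names are ours) that builds orthonormal token identifiers and checks that an edge token responds only to its endpoint nodes:

```python
import numpy as np

n, d_p = 4, 4                                   # 4 nodes; identifier dim d_p >= n
# Rows of a random orthogonal matrix give n orthonormal node identifiers P_v.
P = np.linalg.qr(np.random.randn(d_p, d_p))[0][:n]

edges = [(0, 1), (1, 2), (2, 3)]

# Node token v is augmented with [P_v, P_v]; edge token (u, v) with [P_u, P_v].
node_id = {v: np.concatenate([P[v], P[v]]) for v in range(n)}
edge_id = {(u, v): np.concatenate([P[u], P[v]]) for (u, v) in edges}

# <[P_u, P_v], [P_k, P_k]> = 1 iff k is an endpoint of edge (u, v), else 0.
for (u, v), eid in edge_id.items():
    for k in range(n):
        assert np.isclose(eid @ node_id[k], 1.0) == (k in (u, v))
print("dot-product incidence test passed")
```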
Type Identifier Given an input graph \( G = (V, E) \), TokenGT applies a trainable matrix \( E = [E^V; E^E] \in \mathbb{R}^{2 \times d_e} \) to augment the tokens as follows: for node \( v \in V \), the token \([X_v, P_v, P_v, E^V]\); for edge \((u, v) \in E \), the token \([X_{(u,v)}, P_u, P_v, E^E]\). With token identifiers and type identifiers, the initialized token embeddings \( X = [X^V; X^E] \in \mathbb{R}^{(n+m) \times d} \) are augmented to \( X^{\text{in}} \in \mathbb{R}^{(n+m) \times (d+2d_p+d_e)} \). Then, TokenGT passes the input to a standard Transformer encoder with vanilla multi-head self-attention layers, where a \([\text{CLS}]\) token is additionally concatenated to obtain the graph embedding for downstream finetuning. 3D Extension To align with our geometric pretraining objectives, we propose a minor extension of the original 2D TokenGT formulation to accommodate 3D molecular graphs. Let \( G = (V, E, P) \) be a 3D graph, where \( P = \{p_1, ..., p_n\}, p_i \in \mathbb{R}^{3} \), is the set of atom Cartesian coordinates. We augment the initial embedding \( X_{(u,v)} \) of edge \( e_{(u,v)} \) with the bond length, bond angles, and dihedral angles related to \( e_{(u,v)} \) through a radial/spherical harmonics basis function \( e_{\text{RBF}}/e_{\text{SBF}} \):
- Bond length: \( X_{bl(u,v)} = e_{\text{RBF}}(\|p_v - p_u\|) \)
- Bond angle: \( X_{ba(u,v)} = \sum_k e_{\text{SBF}}^{(uv,uk)}, \; k \in N(u) \setminus v \)
- Dihedral angle: \( X_{da(u,v)} = \sum_{k,j} e_{\text{SBF}}^{(kuv,uvj)}, \; k \in N(u) \setminus v, \; j \in N(v) \setminus u \)
- Augmented edge embedding: \( X_{3D(u,v)} = X_{(u,v)} + X_{bl(u,v)} + X_{ba(u,v)} + X_{da(u,v)} \)

Algorithm 1 Local Structure Reconstruction in LEGO
Require:
- \( G \): Input graph \( G = (V, E, P) \) with \( n \) nodes and \( m \) edges.
- \( M_{\text{center}} \in \{0,1\}^n \), \( M_{\text{edge}} \in \{0,1\}^m \), \( M_{\text{leaf}} \in \{0,1\}^n \): Mask indicators for center atoms, edges, and leaf atoms.
- \( \text{Emb} \in \mathbb{R}^{(n+m) \times \text{dim}} \): Embeddings for the tokens in \( G \) after a standard Transformer encoder.
- \( \text{LEGOHead}_i, i \in \{1, 2, 3, 4\} \): Network modules for reconstructing perturbed local structures. The four values of \( i \) correspond to the global orientation of center atoms, the edge length of edges, the azimuthal angles of leaf nodes, and the polar angles of leaf nodes, respectively.
- Labels: Ground-truth labels of the geometric elements: \( z, l, \theta, \psi \).
- \( T \): Training steps
1: while \( T \neq 0 \) do
2: Pad \( M_{\text{center}}, M_{\text{edge}}, M_{\text{leaf}} \) to size \([n + m, 1]\)
3: \( z_{\text{pred}} = \text{LEGOHead}_1(\text{Emb}[M_{\text{center}}]) \)
4: \( l_{\text{pred}} = \text{LEGOHead}_2(\text{Emb}[M_{\text{edge}}]) \)
5: \( \theta_{\text{pred}} = \text{LEGOHead}_3(\text{Emb}[M_{\text{leaf}}]) \)
6: \( \psi_{\text{pred}} = \text{LEGOHead}_4(\text{Emb}[M_{\text{leaf}}]) \)
7: Loss = \( w_{\text{distance}} \cdot \text{MSELoss}(\text{Labels}; z_{\text{pred}}, l_{\text{pred}}) + w_{\text{angle}} \cdot \text{VonMisesLoss}(\text{Labels}; \theta_{\text{pred}}, \psi_{\text{pred}}) \)
8: Optimize(Loss)
9: \( T = T - 1 \)
10: end while

3.3 Pretrain via Localized Geometric Generation At a high level, our method first segments the 3D molecular structure into non-overlapping, one-hop local structures, as sketched below.
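A simplified sketch of this segmentation step in plain Python (ours; the paper's procedure, detailed in the "Local Structure Segmentation" paragraph below, additionally traverses nodes in BFS order):

```python
import random

def segment_local_structures(adj):
    """Non-overlapping one-hop segmentation: a node flagged as a leaf of one
    local structure is never chosen as the center of another, while two
    local structures may share leaf nodes."""
    non_terminal = [u for u in adj if len(adj[u]) > 1]  # skip terminal atoms
    random.shuffle(non_terminal)                        # randomized sampling order
    segmented = {u: False for u in adj}
    centers = []
    for u in non_terminal:
        if segmented[u]:            # u was already claimed as a leaf
            continue
        centers.append(u)
        segmented[u] = True
        for v in adj[u]:            # one-hop neighbors become leaves
            segmented[v] = True
    return centers

# Toy molecule: a five-atom chain 0-1-2-3-4 (adjacency list).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(segment_local_structures(adj))   # e.g., [2] or [1, 3]
```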
We then perturb a proportion of these units through a corruption strategy that simultaneously masks token attributes and adds noise to node coordinates. Subsequently, we reconstruct the perturbed local structures in a generative way by predicting their global orientation and local geometric arrangements. Figure 2 visualizes the workflow of our method. Local Structure Segmentation The core idea of local structure segmentation is to ensure that the segmented units do not overlap; that is, a leaf node in one local structure cannot be the center node in another local structure, although the overlap of two leaf nodes is allowed. To elaborate, we first traverse the graph nodes in a BFS order \( \pi \), collect the non-terminal nodes as \( V_{\text{non-terminal}} \), and initialize a boolean tensor \( f_{\text{segmented}} = \{0\}^n \). Then, we sample a node \( u \) from \( V_{\text{non-terminal}} \) to form a local structure, where we add \( u \) to \( V_{\text{seg-center}} \) and set the flags of its one-hop neighbors to true: \( f_{\text{segmented}}[v] = \text{True}, v \in N(u) \). We repeat the above operation until all the atoms in \( V_{\text{non-terminal}} \) have been segmented. Though our segmentation algorithm possesses randomness and may leave out terminal atoms at times, we see this as a way to increase generalizability and robustness. By sampling different central nodes during segmentation, the model is encouraged to learn more holistic representations rather than relying on a fixed decomposition across multiple pretraining iterations. Regarding terminal atoms that are initially excluded from segmented units, they are likely to be eventually incorporated through successive iterations that segment their tetrahedron-like neighborhoods. Local Structure Perturbation Given the segmented result of a molecular graph \( V_{\text{seg-center}} \), we randomly perturb some local structures with ratio \( m_{LS} \) and obtain the set of masked centers \( V_{\text{mask-center}} \) and an indicator tensor \( M_{\text{center}} \in \{0, 1\}^n \). Since we mask all the nodes and edges in the selected local structures, the mask ratio over all tokens (atoms and edges), \( m_{\text{token}} \), differs from \( m_{LS} \); the statistical relationship between the two mask ratios is displayed in Appendix A. Based on the masked centers, we can denote the rest of the perturbed local structures as \( E_{\text{mask-edge}} = \{(u, v) \mid u \text{ or } v \in V_{\text{mask-center}}\} \) and \( V_{\text{mask-leaf}} = \{v \mid (u, v) \in E_{\text{mask-edge}} \text{ for } u \in V_{\text{mask-center}}\} \), along with \( M_{\text{edge}} \in \{0, 1\}^m \) and \( M_{\text{leaf}} \in \{0, 1\}^n \). Then, we conduct the perturbation by adding coordinate noise to atoms in \( V_{\text{mask-center}} \) and \( V_{\text{mask-leaf}} \), as well as masking the edge attributes in \( E_{\text{mask-edge}} \). Local Structure Reconstruction To successfully reconstruct the perturbed local structures, we must consider two critical aspects: the global orientation of the local structure within the entire molecule and the internal arrangements between nodes and edges within a local structure; a sketch of the reconstruction targets follows.
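As a companion to Algorithm 1, the following PyTorch sketch (ours; the tensor shapes and the simplified von Mises objective are assumptions, not the paper's exact heads) forms the geometric targets for one masked local structure and combines the distance and angle losses:

```python
import torch
import torch.nn.functional as F

def spherical(rel):
    """Spherical coordinates of leaf offsets rel = p_leaf - p_center, shape [N, 3]."""
    r = rel.norm(dim=-1)                                       # radial distance (edge length)
    theta = torch.atan2(rel[:, 1], rel[:, 0])                  # azimuthal angle
    psi = torch.acos((rel[:, 2] / r.clamp(min=1e-8)).clamp(-1.0, 1.0))  # polar angle
    return r, theta, psi

def angle_loss(pred, target, kappa=1.0):
    # Simplified von Mises negative log-likelihood (up to constants):
    # minimized when pred equals target modulo 2*pi.
    return (kappa * (1.0 - torch.cos(pred - target))).mean()

# One masked local structure: center at the origin, three leaf atoms.
leaves = torch.randn(3, 3)
r, theta, psi = spherical(leaves)

# Stand-ins for LEGOHead outputs on the Transformer token embeddings.
r_pred, theta_pred, psi_pred = r + 0.1, theta + 0.1, psi - 0.1
z_pred, z_true = torch.randn(2), torch.randn(2)   # placeholder orientation term z

w_distance, w_angle = 1.0, 1.0
loss = (w_distance * (F.mse_loss(z_pred, z_true) + F.mse_loss(r_pred, r))
        + w_angle * (angle_loss(theta_pred, theta) + angle_loss(psi_pred, psi)))
print(float(loss))
```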
Table 1: Results for biochemistry property prediction tasks. We compare our models with existing 2D or 3D molecular pretraining models. The first five tasks are classification (ROC-AUC ↑) and the last three are regression (MAE ↓). The best and second best results are **bold** and _underlined_.

| model | BACE | BBBP | Clintox | SIDER | Tox21 | Freesolv | Esol | Lipo |
|-----------|------|------|---------|-------|-------|----------|-------|-------|
| AttrMask | _84.5_ | 68.7 | 72.6 | 62.7 | 78.1 | 2.764 | 1.100 | 0.739 |
| GROVER | 81.0 | 69.5 | 76.2 | 65.4 | 68.2 | 2.272 | 0.895 | 0.823 |
| MolCLR | 82.4 | 72.2 | 91.2 | 58.9 | 75.0 | 2.594 | 1.271 | 0.691 |
| 3DInfomax | 79.4 | 69.1 | 9.4 | 53.3 | 74.4 | 2.337 | 0.894 | 0.695 |
| GraphMVP | 81.2 | _72.4_ | 79.1 | 63.9 | 75.9 | - | 1.029 | 0.681 |
| GEM | **85.6** | 72.2 | 90.1 | _67.2_ | _80.6_ | 1.877 | 0.798 | _0.660_ |
| Uni-Mol | **85.6** | _72.4_ | _91.9_ | 65.9 | 79.6 | **1.620** | _0.788_ | **0.603** |
| 3D-PGT | 80.9 | 72.1 | 79.4 | 60.6 | 73.8 | - | 1.061 | 0.687 |
| LEGO | 81.9 | **74.2** | **94.3** | **72.3** | **83.9** | _1.844_ | **0.704** | 0.804 |

Regarding spatial orientation, we predict the spherical coordinates of central atoms within masked local structures. These coordinates indicate where to position each unit within the overall molecule and its orientation relative to other units. For the internal geometry, the previously predicted central atom serves as the origin of a spherical coordinate system (SCS). We then predict the radial distance ($r$, edge length), azimuthal angle ($\theta$), and polar angle ($\psi$) of each masked peripheral atom within this SCS. Edge lengths are directly predicted as they closely relate to bond type. Meanwhile, the angular values guide the subsequent reconstruction of three-dimensional coordinates for the peripheral atoms. The local structure reconstruction procedure is summarized in Algorithm 1. We use the Mean Squared Error as the loss function for edge length and radius, and adopt the von Mises-Fisher loss to train angle-related terms.

## 4 EXPERIMENTS

### 4.1 DATASETS AND EXPERIMENTAL SETUP

**Pre-training.** We pretrain LEGO on the OGB-PCQM4Mv2 dataset (Hu et al., 2021), which contains 3D molecular structures simulated by density functional theory (DFT). The dataset has 3.38 million molecules, each with one dominant equilibrium conformation. While considering multiple conformations can describe 3D molecular structures more comprehensively and improve representability (Liu et al., 2021a; Stärk et al., 2022), we believe that learning molecular semantics from the dominant conformation is sufficient to validate our method. Handling multiple conformations is left for future work. We follow the Transformer encoder configuration of the original TokenGT base model: 12 layers, 768 embedding dimensions, 32 attention heads, and the graph Laplacian as the node identifier. We mask $m_{LS}=10\%$ of the local structures and set the coordinate noise scale to 0.3. The weights for the distance loss $w_{distance}$ and the angle loss $w_{angle}$ are both set to 1. We use the AdamW optimizer with $(\beta_1, \beta_2) = (0.99, 0.999)$ and a weight decay of 0.1. We apply a polynomial learning rate scheduler with a peak learning rate of $2 \times 10^{-4}$ and 150k warm-up steps over 1M iterations with a batch size of 256. The model is pretrained on 8 NVIDIA A100s for 300 epochs. **Fine-tuning.** We use the $[\text{CLS}]$ token as the graph representation for downstream finetuning and pass it through a two-layer MLP projection head for task predictions.
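A minimal sketch of such a projection head (ours; the 768-dimensional embedding follows the encoder configuration above, and the class name is hypothetical):

```python
import torch
import torch.nn as nn

class FinetuneHead(nn.Module):
    """Two-layer MLP on the [CLS] graph embedding for downstream prediction."""
    def __init__(self, hidden_dim: int = 768, num_outputs: int = 1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_outputs),
        )

    def forward(self, cls_embedding: torch.Tensor) -> torch.Tensor:
        return self.mlp(cls_embedding)

head = FinetuneHead()
cls_emb = torch.randn(8, 768)      # a batch of [CLS] embeddings from the encoder
print(head(cls_emb).shape)         # torch.Size([8, 1])
```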
We evaluate the pretrained model on biochemical and quantum molecular properties. Biochemical properties test how well the model captures semantics from the segmented units within a molecule, while quantum properties test the model's ability to represent 3D structures in terms of interatomic interactions. For biochemical properties, we choose the widely-used MoleculeNet benchmark (Wu et al., 2018), where the related tasks can be categorized into physical chemistry, biophysics, and physiology. The original MoleculeNet dataset contains only 2D data, and existing 3D pretraining baselines take 2D graphs as input as well. We follow this setting to demonstrate the transferability of our pretrained model.

Table 2: Results on the PCQM4Mv2 validation set in the OGB Large-Scale Challenge (Hu et al., 2021). The results are evaluated by Mean Absolute Error (MAE). The best and second best results are **bold**.

| model | #param. | Valid MAE (↓) |
|------------------------|---------|---------------|
| GraphGPS<sub>SMALL</sub> (Rampášek et al., 2022) | 6.2M | 0.0938 |
| GRPE<sub>BASE</sub> (Park et al., 2022) | 46.2M | 0.0890 |
| EGT (Hussain et al., 2022) | 89.3M | 0.0869 |
| GRPE<sub>LARGE</sub> (Park et al., 2022) | 46.2M | 0.0867 |
| Graphormer (Ying et al., 2021) | 47.1M | 0.0864 |
| GraphGPS<sub>MEDIUM</sub> (Rampášek et al., 2022) | 19.4M | 0.0858 |
| GraphGPS<sub>DEEP</sub> (Rampášek et al., 2022) | 13.8M | 0.0852 |
| GEM-2 (Liu et al., 2022a) | 32.1M | 0.0793 |
| Transformer-M (Luo et al., 2022) | 47.1M | 0.0787 |
| GPS++<sub>BASE</sub> (Masters et al., 2022) | 44.3M | 0.0778 |
| 3D-PGT (Wang et al., 2023) | 42.6M | **0.0762** |
| TokenGT (Kim et al., 2022) | 48.5M | 0.0910 |
| LEGO (ours) | 52.7M | 0.0817 |

Following previous works (Zhu et al., 2022; Fang et al., 2022), the datasets are split according to their molecular scaffolds by 8:1:1. We use Bayesian search to find the best hyper-parameter combination with a maximum of 64 trials. For quantum properties, we choose OGB-LSC PCQM4Mv2 (Hu et al., 2021) as the benchmark. Given 3D molecular structures, the task requires the model to predict the HOMO-LUMO gap of the molecules, an important quantum property that has been shown to closely correlate with macroscopic molecular properties. Since the test set is not open-sourced, we report the validation MAE, as most methods do. **Baselines.** For MoleculeNet, we mainly compare LEGO with existing state-of-the-art 3D-based pretrained models (Stärk et al., 2022; Liu et al., 2021a; Fang et al., 2022; Zhu et al., 2022). We also select three typical pretraining models on 2D graphs in order to illustrate the effectiveness of leveraging 3D geometry information: AttrMask (Hu et al., 2019), GROVER (Rong et al., 2020), and GraphCL (You et al., 2020). In terms of quantum property prediction, our baselines cover current state-of-the-art methods, including GraphGPS (Rampášek et al., 2022), GRPE (Park et al., 2022), EGT (Hussain et al., 2022), Graphormer (Ying et al., 2021), Transformer-M (Luo et al., 2022), GPS++ (Masters et al., 2022), and 3D-PGT (Wang et al., 2023).

### 4.2 Main Experimental Results

In this section, we evaluate our pretrained model on the two property prediction tasks and analyze what improvement the model can obtain via our structured pretraining. For biochemical properties, we achieve state-of-the-art results on 5 out of 8 tasks and comparable performance on 2 additional tasks (Table 1).
Specifically, LEGO demonstrates significantly improved performance on predicting physiological properties such as toxicity, indicating that our method can effectively capture functional semantics in molecular structures. LEGO also achieves strong results on tasks such as Freesolv and Esol, which are related to the properties of molecules in a water environment. However, it underperforms on Lipo, which is related to a lipid environment. This difference in transfer learning may be due to the significant difference between the conformations molecules exhibit in a lipid environment and the equilibrium conformations used in our pretraining. Again, these results validate our motivation that exploiting functional semantics through a proper segmentation of molecular structures is vital. Table 2 exhibits the validation results on PCQM4Mv2 for quantum property prediction. As shown in the table, although LEGO boosts performance by 10.2% over the non-pretrained TokenGT, it lags behind the state-of-the-art result. However, we would argue this is because all the other baselines introduce complicated graph-specific encodings into the model, while we utilize a pure transformer backbone. The primary contribution of this work is to give a glimpse of how the proper selection of semantic units impacts 3D molecular pretraining, and we believe further introducing graph inductive biases will improve our results.

4.3 Ablation Studies In this section, we ablate key design elements of the proposed LEGO pretraining paradigm. Mask Ratio and Noise Scale Zaidi et al. (2022) and Feng et al. (2023) point out that in molecular denoising pretraining, excessive noise often leads to training divergence and detrimental impacts. Does this conclusion still hold for our structured pretraining? The ablation results in Table 3 give a positive answer. From the table, we observe decreased performance on PCQM4Mv2 as the mask ratio and noise scale parameters for local structure (LS) perturbation are increased. We attribute this trend to the greater difficulty of reconstructing the original data when more extensive corruption is introduced across larger molecular fractions during pre-training. Specifically, higher mask ratios lead to a greater number of perturbed local structures, while larger noise scales further distort the original topology of the units. With excessive corruption, preserving the original structural semantics for reconstruction becomes more challenging, limiting the gains from the pre-training phase for downstream transfer. Random vs Structured To ablate the effect of our structured design in pretraining, we adopt random masking on atoms with \( m_{\text{atom}} = 0.36 \), which corresponds to the structured counterpart \( m_{\text{LS}} = 0.1 \). Table 4 demonstrates that naive atomic-level noise leads to inferior performance compared to LEGO's incorporation of structural semantics during perturbation and reconstruction, quantifying the gains of a chemistry-aware, structure-based procedure for enhancing molecular representations through self-supervised objectives.

5 Conclusion In this paper, we propose a novel approach for self-supervised learning on 3D molecular structures. By treating tetrahedrons within 3D molecular structures as fundamental building blocks, we implement structured denoising to capture both local and global features. We also address the atom-bond inconsistency problem by explicitly modeling edges in the molecular graph.
Through pretraining, our approach achieves competitive results on both biochemical and quantum molecule property prediction tasks. In the future, we aim to investigate integrating additional graph inductive biases into the model while retaining explicit edge representations. Furthermore, we plan to validate the proposed segmentation strategy across a broader range of molecular structures and explore alternative perturbation techniques.

Table 3: Ablation results on PCQM4Mv2 for different \( m_{\text{LS}} \) and noise scales.

| \( m_{\text{LS}} \) | noise scale | equivalent \( m_{\text{atom}} \) | Valid MAE |
|-------------------|-------------|-------------------------------|-----------|
| 0.1 | 0.3 | 0.36 | **0.0817** |
| 0.1 | 1.0 | 0.36 | 0.0862 |
| 0.15 | 0.3 | 0.57 | 0.0877 |
| 0.2 | 0.3 | 0.77 | 0.0885 |

Table 4: Comparison of random and structured pretraining on PCQM4Mv2.

| Model | Valid MAE |
|------------------------|-----------|
| LEGO | **0.0817** |
| randomly perturbed | 0.0883 |

REFERENCES Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. *Nature Communications*, 13(1):2453, 2022. Stefan Chmiela, Valentin Vassilev-Galindo, Oliver T Unke, Adil Kabylda, Huziel E Sauceda, Alexandre Tkatchenko, and Klaus-Robert Müller. Accurate global machine learning force fields for molecules with hundreds of atoms. *Science Advances*, 9(2):eadf0873, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, 2019. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. *arXiv preprint arXiv:2012.09699*, 2020. Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. ChemRL-GEM: Geometry enhanced molecular representation learning for property prediction. *arXiv preprint arXiv:2106.06130*, 2021. Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning for property prediction. *Nature Machine Intelligence*, 4(2):127–134, 2022. Shikun Feng, Yuyan Ni, Yanyan Lan, Zhi-Ming Ma, and Wei-Ying Ma. Fractional denoising for 3D molecular pre-training. In *International Conference on Machine Learning*, pp. 9938–9961. PMLR, 2023. Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. SE(3)-Transformers: 3D roto-translation equivariant attention networks. *Advances in Neural Information Processing Systems*, 33:1970–1981, 2020. Jiaqi Han, Yu Rong, Tingyang Xu, and Wenbing Huang. Geometrically equivariant graph neural networks: A survey. *arXiv preprint arXiv:2202.07230*, 2022. Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec.
Strategies for pre-training graph neural networks. *arXiv preprint arXiv:1905.12265*, 2019. Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. OGB-LSC: A large-scale challenge for machine learning on graphs. *arXiv preprint arXiv:2103.09430*, 2021. Md Shamim Hussain, Mohammed J Zaki, and Dharmashankar Subramanian. Global self-attention as a replacement for graph convolution. In *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*, pp. 655–665, 2022. Rui Jiao, Jiaqi Han, Wenbing Huang, Yu Rong, and Yang Liu. Energy-motivated equivariant pre-training for 3D molecular graphs. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 8096–8104, 2023. Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, and Seunghoon Hong. Pure transformers are powerful graph learners. *arXiv preprint arXiv:2207.02505*, 2022. Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. *arXiv preprint arXiv:2003.03123*, 2020.
s6bKLlF4Pe
I am doubtful about the significance of the convergence results. The convergence result with GPI follows the same rate as the convergence rate without GPI, and it is hard to tell directly what the difference in the constants is. A thorough discussion with some examples would give readers a better understanding of the upper bound.
Provable Knowledge Transfer using Successor Features for Deep Reinforcement Learning Anonymous authors Paper under double-blind review

Abstract This paper studies the transfer reinforcement learning (RL) problem where multiple RL problems have different reward functions but share the same underlying transition dynamics. In this setting, the Q-function of each RL problem (a.k.a. a task) can be decomposed into a successor feature (SF) and a reward mapping: the former characterizes the transition dynamics, and the latter characterizes the task-specific reward function. This Q-function decomposition, coupled with a policy improvement operator known as generalized policy improvement (GPI), reduces the search space of finding the optimal Q-function, and the SF & GPI framework exhibits promising empirical performance compared to traditional RL methods like Q-learning. However, its theoretical foundations remain largely unestablished, especially when learning successor features using deep neural networks (SF-DQN). This paper studies provable knowledge transfer using SF-DQN in transfer RL problems. We establish the first convergence analysis with provable generalization guarantees for SF-DQN with GPI. The theory reveals that SF-DQN with GPI outperforms conventional RL approaches, such as the deep Q-network, in terms of both a faster convergence rate and better generalization. Numerical experiments on real and synthetic RL tasks support the superior performance of SF-DQN & GPI, quantitatively aligning with our theoretical findings.

1 Introduction In reinforcement learning (RL), the goal is to train an agent to perform a task within an environment in a desirable manner by allowing the agent to interact with the environment. Here, the agent is guided towards the desirable behavior by rewards, and the optimal policy is derived from a learned value function (Q-function) that selects the best actions to maximize immediate and future rewards. This framework can effectively capture a wide array of real-world applications, such as gaming (Mnih et al., 2013; Silver et al., 2017), robotics (Kalashnikov et al., 2018), autonomous vehicles (Shalev-Shwartz et al., 2016; Schwarting et al., 2018), healthcare (Coronato et al., 2020), and natural language processing (Tenney et al., 2018). However, RL agents require a significant number of interactions with the environment to tackle complex tasks, especially when RL is equipped with deep neural networks (DNNs). For example, AlphaGo (Silver et al., 2017) required 29 million matches and 5000 TPUs at a cost exceeding $35 million, which is time-consuming and memory-intensive. Nevertheless, many complex real-world problems naturally decompose into multiple interrelated sub-problems, all sharing the same environmental dynamics (Sutton et al., 1999; Bacon et al., 2017; Kulkarni et al., 2016a). In such scenarios, it becomes highly advantageous for an agent to harness knowledge acquired from previous tasks to enhance its performance in tackling new but related challenges. This practice of leveraging knowledge from one task to improve performance in others is known as transfer learning (Lazaric, 2012; Taylor & Stone, 2009; Barreto et al., 2017). This paper focuses on an RL setting with multiple tasks, where each task is associated with a different reward function but shares the same environment. This setting naturally arises in many real-world applications such as robotics (Yu et al., 2020).
We consider exploring the knowledge transfer among multiple tasks via the successor feature (SF) framework (Barreto et al., 2017), which disentangles the environment dynamics from the reward function at an incremental computational cost. The SF framework is derived from the successor representation (SR) (Dayan, 1993) by introducing value function approximation. Specifically, SR (Dayan, 1993) decouples the value function into a future state occupancy measure and a reward mapping. Here, the future state occupancy measure characterizes the transition dynamics of the environment, and the reward mapping characterizes the reward function of the task. SF is a natural application of SR to value function approximation. Furthermore, Barreto et al. (2017) propose a generalization of classic policy improvement, termed generalized policy improvement (GPI), enabling smooth knowledge transfer across learned policies. In contrast to traditional policy improvement, which typically considers only a single policy, GPI operates by maintaining a set of policies, each associated with a distinct skill the agent has acquired. This approach enables the agent to switch among these policies based on the current state or task requirements, providing a flexible and adaptive framework for decision-making. Empirical findings presented in Barreto et al. (2017) highlight the superior transfer performance of SF & GPI in deep RL when compared to conventional methods like deep Q-networks (DQNs). Subsequent works further justified the improved performance of SF in subgoal identification (Kulkarni et al., 2016b) and real-world robot navigation (Zhang et al., 2017). While performance guarantees of SF-based learning are provided in the simple tabular setting (Barreto et al., 2017; 2018), less is known for such approaches in the widely used function approximation setting. In this context, this paper aims to close this gap by providing theoretical guarantees for SF learning in the context of DNNs. Our objective is to explore the convergence and generalization analysis of SF when paired with DNN approximation. We also seek to delineate the conditions under which SF learning can offer more effective knowledge transfer among tasks when contrasted with classical deep reinforcement learning (DRL) approaches, e.g., DQN (Mnih et al., 2013). Contributions. This paper presents the first convergence analysis with generalization guarantees for successor feature learning with deep neural network approximation (SF-DQN). This paper focuses on estimating the optimal Q-value function through successor feature decomposition, where the successor feature component is approximated through a deep neural network. The paper offers a comprehensive analysis of the convergence of deep Q-networks with successor feature decomposition and provides insights into the improved performance of the learned Q-value function derived from successor feature decomposition. The key contributions of this study are as follows: C1. The convergence analysis of the proposed SF-DQN to the optimal Q-function with generalization guarantees. By decomposing the reward into a linear combination of the transition feature and the reward mapping, we demonstrate that the optimal Q-function can be learned by alternately updating the reward mapping and the successor feature using the collected data in online RL.
This learned Q-function converges to the optimal Q-function with generalization guarantees at a rate of $1/T$, where $T$ is the number of iterations in updating the successor features and reward mappings. C2. The theoretical characterization of enhanced performance by leveraging knowledge from previous tasks through GPI. This paper characterizes the convergence rate with generalization guarantees in transfer RL utilizing GPI. The convergence rate accelerates with the degree of correlation between the source and target tasks. C3. The theoretical characterization of the superior transfer learning performance of SF-DQN over the non-representation-learning approach, DQN. This paper quantifies the transfer learning ability of the SF-DQN and DQN algorithms by evaluating their generalization error when transferring knowledge from one task to another. Our results indicate that SF-DQN achieves improved generalization compared to DQN, demonstrating the superiority of SF-DQN in transfer RL.

1.1 RELATED WORKS Successor features in RL. In pioneering work, Dayan (1993) introduced the concept of SR, demonstrating that the value function can be decomposed into a reward mapping and a state representation that measures the future state occupancy from a given state, with a proof of learning feasibility in tabular settings. Subsequently, Barreto et al. (2017) extended SR in three directions: (1) the feature domain of SR is extended from states to state-action pairs, known as SF; (2) DNNs are deployed as function approximators to represent the SF and reward mappings; (3) the GPI algorithm is introduced to accelerate policy transfer across multiple tasks. Barreto et al. (2017; 2018) provided transfer guarantees for Q-learning with SF and GPI in the tabular setting. Furthermore, Kulkarni et al. (2016b) and Zhang et al. (2017) apply SF learning with DNN-based schemes to subgoal identification and robot navigation, respectively. A comprehensive comparison of RL transfer using SF under different assumptions can be found in Zhu et al. (2023). RL with neural networks. Recent advancements in RL with neural network approximation mainly build on the Bellman Eluder dimension (Jiang et al., 2017; Russo & Van Roy, 2013), the Neural Tangent Kernel (NTK) (Yang et al., 2020; Cai et al., 2019; Xu & Gu, 2020; Du et al., 2020), and Besov regularity (Suzuki, 2019; Ji et al., 2022; Nguyen-Tang et al., 2022). However, each of these frameworks has its own limitations. The Eluder dimension exhibits exponential growth even for shallow neural networks (Dong et al., 2021), making it challenging to characterize sample complexity in real-world applications of DRL. The NTK framework linearizes DNNs to bypass the non-convexity derived from the non-linear activation functions in neural networks. Nevertheless, it requires using computationally inefficient, extremely wide neural networks (Yang et al., 2020). Moreover, the NTK approach falls short in explaining the advantages of utilizing non-linear neural networks over linear function approximation (Liu et al., 2022; Fan et al., 2020). The Besov space framework (Ji et al., 2022; Nguyen-Tang et al., 2022; Liu et al., 2022; Fan et al., 2020) requires sparsity in neural networks and makes the impractical assumption that the algorithm can effectively identify the global optimum, which is infeasible for non-convex objective functions involving neural networks.
**Theory of generalization in deep learning.** The theory of generalization in deep learning has been extensively developed in supervised learning, where labeled data is available throughout training. Generalization in learned models necessitates a low training error and a small generalization gap. However, in DNNs, training errors and generalization gaps are analyzed separately due to their non-convex nature. To ensure bounded generalization, it is common to focus on one-hidden-layer neural networks (Safran & Shamir, 2018) in convergence analysis. Existing theoretical analysis tools in supervised learning with generalization guarantees draw heavily from various frameworks, including the Neural Tangent Kernel (NTK) framework (Jacot et al., 2018; Du et al., 2018; Lee et al., 2018), model recovery techniques (Zhong et al., 2017; Ge et al., 2018; Bakshi et al., 2019; Soltanolkotabi et al., 2018; Zhang et al., 2020), and the analysis of structured data (Li & Liang, 2018; Shi et al., 2022; Brutzkus & Globerson, 2021; Allen-Zhu & Li, 2022; Karp et al., 2021; Wen & Li, 2021).

## 2 Preliminaries

In this paper, we address the learning problem involving multiple tasks \( \{\mathcal{T}_i\}_{i=1}^n \) and aim to find the optimal policy \( \pi_i^* \) for each task \( \mathcal{T}_i \). We begin by presenting the preliminaries for a single task and then elaborate on our algorithm for learning with multiple tasks in the following section. ### Markov decision process and Q-learning. The Markov decision process (MDP) is defined as a tuple \((S, A, P, r, \gamma)\), where \( S \) is the state space and \( A \) is the set of possible actions. The transition operator \( P : S \times A \rightarrow \Delta(S) \) gives the probability of transitioning from the current state \( s \) and action \( a \) to the next state \( s' \). The function \( r : S \times A \times S \rightarrow [-R_{\text{max}}, R_{\text{max}}] \) measures the reward for a given state-action pair. The discount factor \( \gamma \in [0, 1) \) determines the significance of future rewards. For the \( i \)-th task, the goal of the agent is to find the optimal policy \( \pi_i^* \) with \( a_t = \pi_i^*(s_t) \) at each time step \( t \), so as to maximize the expected discounted sum of rewards \( \sum_{t=0}^{\infty} \gamma^t r_i(s_t, a_t, s_{t+1}) \), where \( r_i \) denotes the reward function for the \( i \)-th task. For any state-action pair \((s, a)\), we define the action-value function \( Q_i^\pi \) given a policy \( \pi \) as \[ Q_i^\pi(s, a) = \mathbb{E}_{\pi, P} \left[ \sum_{t=0}^{\infty} \gamma^t r_i(s_t, a_t, s_{t+1}) \mid s_0 = s, a_0 = a \right]. \quad (1) \] Then, the optimal \( Q \)-function, denoted as \( Q_i^* \) (i.e., \( Q_i^{\pi_i^*} \)), satisfies the Bellman equation \[ Q_i^*(s, a) := \max_\pi Q_i^\pi(s, a) = \mathbb{E}_{s' \mid s, a} \left[ r_i(s, a, s') + \gamma \max_{a'} Q_i^*(s', a') \right]. \quad (2) \] Through the optimal action-value function \( Q_i^* \), the agent can derive the optimal policy (Watkins & Dayan, 1992; Sutton & Barto, 2018) following \[ \pi_i^*(s) = \arg\max_a Q_i^*(s, a). \quad (3) \]
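As a concrete instance of (2) and (3), the following toy NumPy sketch (ours) runs Bellman optimality backups on a two-state, two-action MDP and reads off the greedy policy:

```python
import numpy as np

gamma = 0.9
# P[s, a] is the next-state distribution; action 0 leads to state 0, action 1 to state 1.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[1.0, 0.0], [0.0, 1.0]]])
R = np.array([[0.0, 1.0],          # R[s, a]: expected one-step reward
              [0.0, 2.0]])

Q = np.zeros((2, 2))
for _ in range(200):
    # Bellman optimality backup (eq. 2): Q(s,a) = E_{s'}[r + gamma * max_a' Q(s',a')]
    Q = R + gamma * P @ Q.max(axis=1)
print(Q)                  # converges to Q*
print(Q.argmax(axis=1))   # greedy policy of eq. (3): action 1 in both states
```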
### Deep Q-networks (DQNs). The DQN utilizes a DNN parameterized by weights \( \omega \), denoted as \( Q_i(s, a; \omega) : \mathbb{R}^d \rightarrow \mathbb{R} \) for the \( i \)-th task, to approximate the optimal Q-value function \( Q_i^* \) in (2). Specifically, given the input feature \( x := x(s, a) \), the output of the \( L \)-hidden-layer DNN is defined as \[ Q_i(s, a; \omega) := \omega_{L+1}^\top / K \cdot \sigma(\omega_L^\top \cdots \sigma(\omega_1^\top x)), \quad (4) \] where \( \sigma(\cdot) \) is the ReLU activation function, i.e., \( \sigma(z) = \max\{0, z\} \). ### Successor feature. For the \( i \)-th task, suppose the expected one-step reward associated with the transition \((s, a, s')\) can be computed as \[ r_i(s, a, s') = \phi(s, a, s')^\top w_i^*, \quad \text{with} \quad \phi, w_i^* \in \mathbb{R}^d, \quad (5) \] where \( \phi \) remains the same for all tasks. With the reward function in (5), the Q-value function in (1) can be rewritten as $$Q_i^\pi(s, a) = \mathbb{E}_{\pi, P} \left[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t, s_{t+1}) \mid (s_0, a_0) \right]^\top w_i^\ast := \psi^\pi_i(s, a)^\top w_i^\ast. \quad (6)$$ Then, the optimal Q-function satisfies $$Q^\ast_i(s, a) = \mathbb{E}_{\pi^\ast, P} \left[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t, s_{t+1}) \mid (s_0, a_0) \right]^\top w_i^\ast := \psi^\ast_i(s, a)^\top w_i^\ast. \quad (7)$$

3 PROBLEM FORMULATION AND ALGORITHM Problem formulation. Without loss of generality, the data is assumed to be collected from the tasks in the order of $\mathcal{T}_1$ to $\mathcal{T}_n$ during the learning process. The goal is to utilize the collected data for each task, e.g., $\mathcal{T}_j$, and the knowledge learned from previous tasks $\{\mathcal{T}_i\}_{i=1}^{j-1}$ to derive the optimal policy $\pi^\ast_j$ for $\mathcal{T}_j$. These tasks share the same environment dynamics, but the reward function changes across tasks, as shown in (5). For each task $\mathcal{T}_i$, we denote its reward as $$r_i = \phi^\top w_i^\ast, \quad \text{with} \quad \|\phi\|_2 \leq \phi_{\text{max}}, \quad (8)$$ where $\phi$ is the transition feature shared across all tasks and $w_i^\ast$ is the reward mapping. From (7), learning the optimal Q-function for the $i$-th task decomposes into two sub-tasks: learning the SF $\psi^\ast_i(s, a)$ and learning the reward mapping $w_i^\ast$. Reward mapping. To find the optimal $w_i^\ast$, we utilize the information from $\phi(s, a, s')$ and $r_i(s, a, s')$. The value of $w_i^\ast$ can be obtained by solving the optimization problem $$\min_{w_i} \| r_i - \phi^\top w_i \|_2. \quad (9)$$ Successor features. We use $\psi^\pi_i$ to denote the successor feature for the $i$-th task, and $\psi^\pi_i$ satisfies $$\psi^\pi_i(s, a) = \mathbb{E}_{s' \mid s, a} \left[ \phi(s, a, s') + \gamma \cdot \psi^\pi_i(s', \pi(s')) \right]. \quad (10)$$ The expression given by (10) aligns perfectly with the Bellman equation in (2), where $\phi$ acts as the reward. Therefore, following DQNs, we utilize a function $\psi_i(s, a)$ parameterized by a DNN as $$\psi_i(\Theta_i; s, a) = H(\Theta_i; x(s, a)), \quad (11)$$ where $x : S \times A \rightarrow \mathbb{R}^d$ is the feature mapping of the state-action pair. Without loss of generality, we assume $\|x(s, a)\|_2 \leq 1$. Then, finding $\psi_i^\ast$ amounts to minimizing the mean squared Bellman error (MSBE) $$\min_{\Theta_i} f(\Theta_i) := \mathbb{E}_{(s, a) \sim \pi_i^\ast} \left[ \psi_i(\Theta_i; s, a) - \mathbb{E}_{s' \mid s, a} \left( \phi(s, a, s') + \gamma \cdot \psi_i(\Theta_i; s', \pi_i^\ast(s')) \right) \right]^2. \quad (12)$$
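To ground (5)-(7) and the SF decomposition, here is a small NumPy sketch (ours) that verifies Q^π(s₀, a₀) = ψ^π(s₀, a₀)ᵀw on a sampled rollout and reuses the same ψ for a new task's reward weights:

```python
import numpy as np

gamma, d, horizon = 0.9, 4, 50
rng = np.random.default_rng(0)
w_task = rng.normal(size=d)               # task-specific reward mapping w_i*

# Transition features phi(s_t, a_t, s_{t+1}) along one rollout under policy pi.
phis = rng.normal(size=(horizon, d))
rewards = phis @ w_task                   # r_i = phi^T w_i*   (eq. 5)

discounts = gamma ** np.arange(horizon)
psi = discounts @ phis                    # psi^pi(s_0, a_0): discounted feature sum
ret = discounts @ rewards                 # discounted return from (s_0, a_0)
assert np.isclose(psi @ w_task, ret)      # Q^pi = psi^T w     (eq. 6)

# Transfer: a new task only changes w; the successor feature psi is reused.
w_new = rng.normal(size=d)
print(psi @ w_new)
```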
It is worth mentioning that although (12) and (9) appear to be independent of each other, the update of $w_i$ does affect the update of $\psi_i$ through a shift in the data distribution: the data are collected under a behavior policy that depends on the current estimates of $\psi_i$ and $w_i$, which shifts the distribution of the collected data away from $\pi^\ast_i$. This, in turn, introduces a bias, depending on the value of $w_i$, into the gradient of $\Theta_i$ when minimizing (12). Generalized policy improvement (GPI). Suppose we have acquired knowledge about the optimal successor features for the previous $n$ tasks, and we use $\hat{\psi}_i$ to denote the estimated successor feature function for the $i$-th task. Now, consider a new task $\mathcal{T}_{n+1}$ with the reward function defined as $r_{n+1} = \phi^\top w_{n+1}^\ast$. Instead of training from scratch, we can leverage the knowledge acquired from previous tasks. We achieve this by deriving the policy as $$\pi(s) = \arg\max_a \max_{1 \leq i \leq n+1} \hat{\psi}_i(s, a)^\top w_{n+1}^\ast. \quad (13)$$ This strategy tends to yield better performance than relying solely on $\hat{\psi}_{n+1}(s, a)^\top w_{n+1}^\ast$, especially when $\hat{\psi}_{n+1}$ has not yet converged to the optimal successor feature $\psi^\ast_{n+1}$ during the early learning stage while some previous task is closely related to the new task, i.e., some $w_i^\ast$ is close to $w_{n+1}^\ast$. This policy improvement operator is derived from Bellman's policy improvement theorem (Bertsekas & Tsitsiklis, 1996) and (2). When the reward is fixed across different policies, e.g., $\{\pi_i\}_{i=1}^n$, and given that the optimal Q-function is the maximum over the entire policy space, the maximum of multiple Q-functions corresponding to different policies, $\max_{1 \leq i \leq n} Q^{\pi_i}$, is expected to be closer to $Q^\ast$ than any individual Q-function $Q^{\pi_i}$. In this paper, the feature $\phi$ in learning the successor feature is analogous to the reward in learning the Q-function. As $\phi$ remains the same for different tasks, this analogy motivates the utilization of GPI in our setting, even though the rewards change.

3.1 Successor Feature Deep Q-Network The goal is to find \( w_i \) and \( \Theta_i \) by solving the optimization problems in (9) and (12) for each task sequentially, and the optimization problems are solved by mini-batch stochastic gradient descent (mini-batch SGD). Algorithm 1 contains two loops: the outer loop number \( n \) is the number of tasks, and the inner loop number \( T \) is the maximum number of iterations in solving (9) and (12) for each task. At the beginning, we initialize the parameters as \( \Theta^{(0)}_i \) and \( w^{(0)}_i \) for each task \( i \) with \( 1 \leq i \leq n \). In the \( t \)-th inner loop for the \( i \)-th task, let \( s_t \) be the current state and \( \Theta_c \) be the learned weights for task \( c \). The agent selects and executes actions according to \[ a = \pi_\beta\big(\max_{c \in [i]} \psi(\Theta_c; s_t, a)^\top w^{(t)}_i\big), \quad (14) \] where \( \pi_\beta(Q(s_t, a)) \) is the policy operator based on the function \( Q(s_t, a) \), e.g., greedy, \( \varepsilon \)-greedy, or softmax. For example, if \( \pi_\beta(\cdot) \) stands for the greedy policy, then \( a = \arg \max_a \max_{c \in [i]} \psi(\Theta_c; s_t, a)^\top w^{(t)}_i \). The collected data are stored in a replay buffer of size \( N \). Then, we sample a mini-batch of samples from the replay buffer and denote the samples as \( D_t \).
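A minimal sketch (ours; the stand-in ψ networks are random linear maps purely for illustration) of the greedy GPI action selection in (13)-(14):

```python
import numpy as np

def gpi_action(psi_fns, w_new, state, actions):
    """argmax_a max_i psi_i(s, a)^T w_new over candidate actions (eq. 13)."""
    scores = np.array([[psi(state, a) @ w_new for psi in psi_fns]
                       for a in actions])          # shape [num_actions, num_tasks]
    best = scores.max(axis=1).argmax()             # GPI: max over tasks, then actions
    return int(best)

d = 4
rng = np.random.default_rng(1)
# Stand-ins for learned SF networks hat{psi}_c(s, a), c = 1..3.
mats = [rng.normal(size=(d, d)) for _ in range(3)]
psi_fns = [lambda s, a, M=M: M @ (s + a) for M in mats]

state = rng.normal(size=d)
actions = [rng.normal(size=d) for _ in range(5)]   # toy action feature vectors
w_new = rng.normal(size=d)
print("chosen action index:", gpi_action(psi_fns, w_new, state, actions))
```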
**Algorithm 1** Successor Feature Deep Q-Network (SF-DQN)
1: **Input:** Number of iterations \( T \), experience replay buffer size \( N \), and step sizes \( \{\eta_t, \kappa_t\}_{t=1}^T \).
2: Initialize \( \{\Theta^{(0)}_i\}_{i=1}^n \) and \( \{w^{(0)}_i\}_{i=1}^n \).
3: **for** Task \( i = 1, 2, \ldots, n \) **do**
4: **for** \( t = 0, 1, 2, \ldots, T - 1 \) **do**
5: Collect data and store it in the experience replay buffer \( D_t \) following the behavior policy \( \pi_t \) in (14).
6: Perform gradient descent steps on \( \Theta^{(t)}_i \) and \( w^{(t)}_i \) following (15).
7: **end for**
8: **end for**
9: Return \( Q_i = \psi_i(\Theta^{(T)}_i)^\top w^{(T)}_i \) for \( i = 1, 2, \ldots, n \).

Next, we update the current weights using a mini-batch gradient descent algorithm following
\[ w^{(t+1)}_i = w^{(t)}_i - \kappa_t \cdot \sum_{m \in D_t} \left( \phi(s_m, a_m, s'_m)^\top w^{(t)}_i - r_i(s_m, a_m, s'_m) \right) \cdot \phi(s_m, a_m, s'_m), \]
\[ \Theta^{(t+1)}_i = \Theta^{(t)}_i - \eta_t \cdot \sum_{m \in D_t} \left( \psi(\Theta^{(t)}_i; s_m, a_m) - \phi(s_m, a_m, s'_m) - \gamma \cdot \psi(\Theta^{(t)}_i; s'_m, a') \right) \cdot \nabla_{\Theta_i} \psi(\Theta^{(t)}_i; s_m, a_m), \] (15)
where \( \eta_t \) and \( \kappa_t \) are the step sizes, and \( a' = \arg \max_a \max_{c \in [i]} \psi(\Theta_c; s'_m, a)^\top w^{(t)}_i \). The gradient for \( \Theta^{(t)}_i \) in (15) can be viewed as the gradient of
\[ \sum_{(s_m, a_m) \in D_t} \big( \psi_i(\Theta_i; s_m, a_m) - \phi(s_m, a_m, s'_m) - \gamma \cdot \max_{a'} \psi_i(\Theta^{(t)}_i; s'_m, a') \big)^2, \] (16)
which is the approximation to (12) obtained by replacing \( \max_{a'} \psi_i^* \) with \( \max_{a'} \psi_i(\Theta^{(t)}_i) \).

4 THEORETICAL RESULTS

4.1 Summary of Major Theoretical Findings

To the best of our knowledge, our results in Section 4.3 provide the first theoretical characterization for SF-DQN with GPI, including a comparison with conventional Q-learning under commonly used assumptions. Before formally presenting them, we summarize the highlights as follows.

| Notation | Description |
|----------|-------------|
| \( K \) | Number of neurons in the hidden layer. |
| \( L \) | Number of hidden layers. |
| \( d \) | Dimension of the feature mapping of \((s, a)\). |
| \( T \) | Number of iterations. |
| \( \Theta^*_i, w^*_i \) | The global optima of (12) and (9) for the \( i \)-th task. |
| \( N \) | Replay buffer size. |
| \( \rho_1 \) | The smallest eigenvalue of \( \mathbb{E}[\nabla\psi_i(\Theta^*_i)^\top \nabla \psi_i(\Theta^*_i)] \). |
| \( \rho_2 \) | The smallest eigenvalue of \( \mathbb{E}[\phi(s, a) \phi(s, a)^\top] \). |
| \( q \) | A variable indicating the relevance between the current and previous tasks. |
| \( C^* \) | A constant related to the distribution shift between the behavior and optimal policies. |

**(T1)** The learned Q-function converges to the optimal Q-function at a rate of \( 1/T \) with generalization guarantees. We demonstrate that the learned parameters \( \Theta^{(T)}_i \) and \( w^{(T)}_i \) converge towards their respective ground truths, \( \Theta^*_i \) and \( w^*_i \), indicating that SF-DQN converges to the optimal Q-function at a rate of \( 1/T \), as depicted in (23) (Theorem 1). Moreover, the generalization error of the learned Q-function scales on the order of $\frac{\|w^{(0)} - w^*\|_2}{1-\gamma - \Omega(N^{-1/2}) - \Omega(C^*)} \cdot \frac{1}{T}$.
By employing a large replay buffer $N$, minimizing the data distribution shift factor $C^*$, and improving the initial estimate $w^{(0)}$ of the task-specific reward weights, we can achieve a lower generalization error.

**(T2)** GPI enhances the generalization of the learned model with respect to the task relevance factor $q^*$. We demonstrate that, when GPI is employed, the learned parameters exhibit an improved estimation error with a reduction factor of $\frac{1-c}{1-cq^*}$ for some constant $c < 1$ (Theorem 2), where $q^*$ is defined in (24). From (24), it is clear that $q^*$ decreases as the distances between task-specific reward weights, denoted as $\|w^*_i - w^*_j\|_2$, become smaller. This indicates a close relationship between the previous tasks and the current task, resulting in a smaller $q^*$ and, consequently, a larger improvement through the usage of GPI.

**(T3)** SF-DQN achieves superior performance over conventional DQN by a factor of $\frac{1+\gamma}{2}$ in the estimation error of the optimal Q-function. When we directly transfer the learned knowledge of the Q-function to a new task without any additional training, our results demonstrate that SF-DQN always outperforms its conventional counterpart, DQN, by a factor of $\frac{1+\gamma}{2}$ (Theorems 3 and 4). As $\gamma$ approaches one, we place greater emphasis on long-term rewards, making the accumulated error derived from the incorrect Q-function more significant. Consequently, this leads to reduced transferability between the source tasks and the target task. Conversely, when $\gamma$ is small, indicating substantial potential for transfer learning between the source and target tasks, we observe a more significant improvement when using SF-DQN.

4.2 Assumptions

We present the assumptions used in deriving our major theoretical results. These assumptions are commonly used in existing RL and neural network learning theories to simplify the presentation.

**Assumption 1.** There exists a deep neural network with weights $\Theta^*_i$ that minimizes (12) for the $i$-th task, i.e., $f(\Theta^*_i) = 0$.

Assumption 1 assumes substantial expressive power of the deep neural network, allowing it to effectively represent $\psi^*_i$ with some unknown ground truth $\Theta^*_i$.

**Assumption 2.** At any fixed outer iteration $t$, the behavior policy $\pi_t$ and its corresponding transition kernel $P_t$ satisfy
$$\sup_{s \in \mathcal{S}} d_{TV}\big(\mathbb{P}(s_{\tau} \in \cdot \mid s_0 = s),\, P_t\big) \leq \lambda \nu^\tau, \quad \forall \tau \geq 0,$$ (17)
for some constants $\lambda > 0$ and $\nu \in (0, 1)$, where $d_{TV}$ denotes the total-variation distance.

Assumption 2 assumes that the Markov chain $\{s_n, a_n, s_{n+1}\}$ induced by the behavior policy is uniformly ergodic with the corresponding invariant measure $P_t$. This assumption is standard in Q-learning (Xu & Gu, 2020; Zou et al., 2019; Bhandari et al., 2018), where the data are non-i.i.d.

**Assumption 3.** For any $\Theta^{(t)}$ and $w^{(t)} \in \mathbb{R}^d$, the greedy policy $\pi_t$ at the $t$-th outer loop, i.e., $\pi_t(a \mid s) = \arg\max_{a'} Q_t(s, a')$, satisfies
$$|\pi_t(a \mid s) - \pi^*(a \mid s)| \leq C \cdot \sup_{(s,a)} |Q_t(s, a) - Q^*(s, a)|,$$ (18)
where $C$ is a positive constant. Equivalently, when $Q_t = \psi(\Theta^{(t)})^\top w^{(t)}$, we have
$$|\pi_t(a \mid s) - \pi^*(a \mid s)| \leq C \cdot \big(\|\Theta^{(t)} - \Theta^*\|_2 + \|w^{(t)} - w^*\|_2\big).$$ (19)

Assumption 3 bounds the difference between the behavior policy and the optimal policy.
Moreover, (19) can be considered a more relaxed variant of condition (2) in Zou et al. (2019), as (19) only requires the constant to hold for the distance of an arbitrary function from the ground truth, rather than for the distance between two arbitrary functions.

4.3 Main Theoretical Findings

4.3.1 Convergence analysis of SF-DQN

Theorem 1 demonstrates that the learned Q-function converges to the optimal Q-function when using SF-DQN for Task 1. Notably, GPI is not employed for the initial task, as we lack prior knowledge about the environment. Specifically, given the conditions that (i) the initial weights for $\psi$ are close to the ground truth as shown in (20), (ii) the replay buffer is large enough as in (21), and (iii) the distribution shift between the behavior policy and the optimal policy is bounded (as discussed in Remark 1), the learned parameters from Algorithm 1 for task 1, $\psi_1(\Theta_1)$ and $w_1$, converge to the ground truths $\psi^*_1$ and $w^*_1$ as in (22), indicating that the learned Q-function converges to the optimal Q-function as in (23).

**Theorem 1** (Convergence analysis of SF-DQN without GPI). Suppose the assumptions in Section 4.2 hold, and the initial neuron weights of the SF of task 1 satisfy
$$\frac{\|\Theta^{(0)}_1 - \Theta^*_1\|_F}{\|\Theta^*_1\|_F} \leq (1 - c_N) \cdot \frac{\rho_1}{K^2}$$ (20)
for some positive $c_N$. Suppose we select the step size as $\eta_t = \frac{1}{t+1}$, and the size of the replay buffer satisfies
$$N = \Omega(c_N^2 \rho_1^{-1} \cdot K^2 \cdot L^2 d \log q).$$ (21)
Then, with probability at least $1 - q^{-d}$, the weights $\Theta^{(T)}_1$ from Algorithm 1 satisfy
$$\|\Theta^{(T)}_1 - \Theta^*_1\|_2 \leq \frac{C_1 + C^* \cdot \|w^{(0)}_1 - w^*_1\|_2}{(1 - \gamma - c_N)(1 - \gamma)\rho_1 - C^*} \cdot \frac{\log^2 T}{T},$$
$$\|w^{(T)}_1 - w^*_1\|_2 \leq \left(1 - \frac{\rho_2}{\phi_{\max}}\right)^T \|w^{(0)}_1 - w^*_1\|_2,$$ (22)
where $C_1 = (2 + \gamma) \cdot R_{\max}$, and $C^* = |A| \cdot R_{\max} \cdot (1 + \log_\nu \lambda^{-1} + \frac{1}{1-\nu}) \cdot C$. Specifically, the learned Q-function satisfies
$$\max_{s,a} |Q_1 - Q^*_1| \leq \frac{C_1 + \|w^{(0)}_1 - w^*_1\|_2}{(1 - \gamma - c_N)(1 - \gamma)\rho_1 - C^*} \cdot \frac{\log^2 T}{T} + \|w^{(0)}_1 - w^*_1\|_2 R_{\max} \left(1 - \frac{\rho_2}{\phi_{\max}}\right)^T.$$ (23)

**Remark 1** (Upper bound of $C$): To ensure that the upper bound in (23) is meaningful, specifically that the denominator is greater than 0, $C$ has the explicit upper bound $C \leq \frac{(1 - \gamma - c_N)(1 - \gamma)\rho_1}{|A| \cdot R_{\max}}$. Considering the definition of $C$ in Assumption 3, this implies that the difference between the behavior policy and the optimal policy is bounded. In other words, the fraction of bad tuples in the collected samples is constrained.

**Remark 2** (Initialization): Note that (20) requires a good initialization. First, such an initialization condition is standard practice in state-of-the-art analyses of Q-learning with deep neural network approximation. Second, according to NTK theory (Jacot et al., 2018), there always exist good local minima, almost as good as the global minima, near some random initialization. Finally, such a good initialization can also be obtained from pre-trained models.

### 4.3.2 Improved Performance with Generalized Policy Improvement

Theorem 2 establishes that the estimated Q-function converges towards the optimal solution when GPI is implemented, as shown in (25), leveraging the prior knowledge learned from previous tasks.
The enhanced performance associated with GPI finds its expression in $q^*$, defined in (24). Notably, when tasks $i$ and $j$ exhibit a higher degree of correlation, meaning that the distance between $w^*_i$ and $w^*_j$ is small, we can observe a more substantial enhancement by employing GPI when transferring knowledge from task $i$ to task $j$, as shown in (25).

**Theorem 2** (Convergence analysis of SF-DQN with GPI). Let us define
$$q^* = \frac{(1 + \gamma)R_{\max}}{1 - \gamma} \cdot \min_{1 \leq i \leq j-1} \frac{\|w^*_i - w^*_j\|_2}{\|\Theta^{(0)}_j - \Theta^*_j\|_2}.$$ (24)
Then, with probability at least $1 - q^{-d}$, the neuron weights $\Theta^{(T)}_j$ for the $j$-th task satisfy
$$\|\Theta^{(T)}_j - \Theta^*_j\|_2 \leq \frac{C_1 + C^* \cdot \|w^{(0)}_j - w^*_j\|_2}{(1 - \gamma - c_N)(1 - \gamma)\rho_1 - \min\{q^*, 1\} \cdot C^*} \cdot \frac{\log^2 T}{T}.$$ (25)

**Remark 3** (Improvement via GPI): Utilizing GPI enhances the convergence rate from the order of $\frac{1}{1-C^*} \cdot \frac{1}{T}$ to the order of $\frac{1}{1-q^* \cdot C^*} \cdot \frac{1}{T}$. When the distance between the source and target tasks is small, $q^*$ can approach zero, indicating an improved generalization error by a factor of $1 - C^*$, where $C^*$ is proportional to the fraction of bad tuples. The improvement achieved through GPI derives from reducing the distance between the behavior policy and the optimal policy, which in turn decreases the fraction of bad tuples in the collected data. Here, $C^*$ is proportional to the fraction of bad tuples without using GPI, and $q^* \cdot C^*$ is proportional to the fraction of bad tuples when GPI is employed.

4.3.3 Bounds for Transfer Reinforcement Learning

From Theorems 1 and 2, we have estimated the optimal Q-function $Q_i^*$ for each task $i$ with $1 \leq i \leq n$ using our proposed SF-DQN. When the reward changes to $r_{n+1}(s, a, s') = \phi(s, a, s')^\top w_{n+1}^*$ for a new task $\mathcal{T}_{n+1}$, as long as we have estimated $w_{n+1}^*$, we can calculate the estimated Q-value function for $\mathcal{T}_{n+1}$ simply by setting
$$\hat{Q}_{n+1}(s, a) = \max_{1 \leq j \leq n} \psi(\Theta_j^{(T)}; s, a)^\top w_{n+1}^*.$$ (26)
As $w_{n+1}^{(t)}$ converges linearly to its optimum $w_{n+1}^*$, which is significantly faster than the sublinear convergence of $\Theta_{n+1}^{(t)}$ shown in (22), the derivation of $\hat{Q}_{n+1}$ in (26) replaces the computation of $\Theta_{n+1}$ with the much more manageable supervised problem of approximating $w_{n+1}^*$, at only a modest performance loss, as shown in (27). This is demonstrated in the following Theorem 3.

**Theorem 3** (Transfer learning via SF-DQN). For the $(n+1)$-th task with $r_{n+1} = \phi^\top w_{n+1}^*$, suppose the Q-value function is derived based on (26). Then we have
$$\max_{s,a} |\hat{Q}_{n+1} - Q_{n+1}^*| \leq \frac{1 + \gamma}{1 - \gamma} \phi_{\text{max}} \min_{j \in [n]} \|w_j^* - w_{n+1}^*\|_2 + \frac{\|w_{n+1}^*\|_2}{(1 - \gamma) \cdot T}.$$ (27)

**Remark 4** (Connection with existing works): The second term of the upper bound in (27), $\frac{\|w_{n+1}^*\|_2}{(1 - \gamma) \cdot T}$, can be interpreted as the $\epsilon$ in Barreto et al. (2017), which results from the approximation error of the optimal Q-functions in the previous tasks. Without the SF decomposition in (7), one can apply a similar strategy to (26) for DQN as
$$\hat{Q}_{n+1}(s, a) = \max_{1 \leq j \leq n} Q(\omega_j^{(T)}; s, a).$$ (28)
In Theorem 4, (29) characterizes the performance of (28) through DQN.
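As a concrete illustration of this transfer step, the following is a minimal sketch of (9) and (26): the reward mapping of the new task is fitted by least squares on observed transitions, and the zero-shot Q-estimate takes the maximum over the $n$ frozen successor-feature heads. The batched least-squares solver is our implementation choice, not the paper's.

```python
import torch

def fit_reward_mapping(phi_batch, r_batch):
    """Least-squares solve of (9): find w minimizing ||r - Phi w||_2, given
    stacked transition features Phi (m x d) and observed rewards r (m,)."""
    sol = torch.linalg.lstsq(phi_batch, r_batch.unsqueeze(-1)).solution
    return sol.squeeze(-1)

def zero_shot_q(psi_nets, w_new, x_sa):
    """Zero-shot Q-estimate for task n+1 via (26): the maximum of
    psi(Theta_j; s, a)^T w_{n+1} over the frozen heads j = 1, ..., n."""
    with torch.no_grad():
        return max(float(net(x_sa) @ w_new) for net in psi_nets)
```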
Compared to Theorem 3, transfer learning via DQN is worse than that via SF-DQN by a factor of $\frac{1 + \gamma}{2}$ when comparing the estimation errors of the optimal Q-function $Q_{n+1}^*$ in (27) and (29), indicating the advantage of using SFs in transfer reinforcement learning.

**Theorem 4** (Transfer learning via DQN). For the $(n+1)$-th task with $r_{n+1} = \phi^\top w_{n+1}^*$, suppose the Q-value function is derived based on (28). Then we have
$$\max_{s,a} |\hat{Q}_{n+1} - Q_{n+1}^*| \leq \frac{2}{1 - \gamma} \phi_{\text{max}} \min_{j \in [n]} \|w_j^* - w_{n+1}^*\|_2 + \frac{\|w_{n+1}^*\|_2}{(1 - \gamma) \cdot T}.$$ (29)

**Remark 5** (Improvement by a factor of $\frac{1 + \gamma}{2}$): Transfer learning performance in SF-DQN is influenced by the knowledge gap between previous and current tasks, primarily attributed to differences in rewards and data distribution. In SF-DQN, the impact of reward differences is relatively small since $\phi$, which plays the role of the reward, remains fixed. The parameter $\gamma$ determines the influence of data distribution differences. A small $\gamma$ prioritizes immediate rewards, so the impact of data distribution on the knowledge gap is insignificant. With a small $\gamma$, the impact of the reward difference dominates, resulting in a large gap between SF-DQN and DQN in transfer learning.

4.4 Technical Challenges and Comparison with Existing Works

**Beyond deep learning theory: challenges in deep reinforcement learning.** The proof of Theorem 1 is inspired by the convergence analysis of one-hidden-layer neural networks in the (semi-)supervised learning domain (Zhong et al., 2017; Zhang et al., 2022). The proof tackles two primary objectives: i) characterizing the local convex region of the objective functions in (12) and (9); ii) quantifying the distance between the gradient defined in (15) and the gradients of the objective functions in (12) and (9). However, extending this approach from the (semi-)supervised learning setting to the deep reinforcement learning domain introduces additional challenges. First, we expand our proof beyond the scope of one-hidden-layer neural networks to encompass multi-layer neural networks. This extension requires new technical tools for characterizing the Hessian matrix and concentration bounds, as outlined in Appendix F.1. Second, the approximation error bound deviates from the supervised learning scenario due to several factors: the non-i.i.d. nature of the collected data, the distribution shift between the behavior policy and the optimal policy, and the approximation error incurred when utilizing (16) to estimate (12). Addressing these challenges requires developing supplementary tools, as mentioned in Lemma 7. Notably, this approximation does not exhibit scaling behavior proportional to $\|\Theta_i - \Theta_i^*\|_2$, resulting in a sublinear convergence rate.

**Beyond DQN: challenges in GPI.** The major challenge in proving Theorems 2-4 centers on deriving the improved performance obtained by utilizing GPI. The intuition is as follows. Imagine we have two closely related tasks, labeled $i$ and $j$, with their respective optimal weight vectors, $w_i^*$ and $w_j^*$, being close to each other. This closeness suggests that these tasks share similar rewards, leading to a bounded distributional shift in the data, which, in turn, implies that their optimal Q-functions should exhibit similarity.
To rigorously establish this intuition, we aim to characterize the distance between these optimal Q-functions, denoted as $|Q_i^* - Q_j^*|$, in terms of the Euclidean distance between their optimal weight vectors, $\|w_i^* - w_j^*\|_2$ (see details in Appendix G). Furthermore, we can only estimate the optimal Q-functions of previous tasks during the learning process, and such estimation error accumulates in temporal difference learning, e.g., in the SF learning of $\psi^*$. We need to develop novel analytical tools to quantify the error accumulated in temporal difference learning (see details in Appendix C), which is unnecessary for supervised learning problems.

5 EXPERIMENTS

This section summarizes empirical validation of the theoretical results obtained in Section 4 using a synthetic RL benchmark environment. The experiment setup and additional experimental results for real-world RL benchmarks are summarized in Appendix E.

**Convergence of SF-DQN with varied initialization.** Figure 1 shows the performance of Algorithm 1 with different distances of the initialization $w_1^{(0)}$ to the ground truth $w_1^*$. When the initialization is close to the ground truth, we observe an increased accumulated reward, which verifies our theoretical findings in (23) that the estimation error of the optimal Q-function reduces as $\|w_1^{(0)} - w_1^*\|_2$ decreases.

Figure 1: Performance of SF-DQN presented in Algorithm 1 on Task 1.

Figure 2: Transfer comparison for SF-DQN and DQN (with GPI).

**Performance of SF-DQN with GPI when adapting to tasks with varying relevance.** We conducted experiments to investigate the impact of GPI under varied task relevance. Since the difference in reward mapping impacts the data distribution shift, the rewards, and consequently the optimal Q-function, we utilize the metric $\|w_1^* - w_2^*\|_2$ to measure the task irrelevance. The results summarized in Table 2 demonstrate that when tasks are similar (i.e., small $\|w_1^* - w_2^*\|_2$), SF-DQN with GPI consistently outperforms its counterpart without GPI. However, when tasks are dissimilar (i.e., large $\|w_1^* - w_2^*\|_2$), both exhibit the same or similar performance, indicating that GPI is ineffective when two tasks are irrelevant. The observations in Table 2 validate our theoretical findings in (25), showing a more significant improvement from using GPI as $\|w_1^* - w_2^*\|_2$ decreases.

| $\|w_1^* - w_2^*\|_2$ | 0.01 | 0.1 | 1 | 10 |
|---------------------|--------|-------|-----|------|
| SF-DQN (w/ GPI) | 0.986 ± 0.007 | 0.965 ± 0.007 | 0.827 ± 0.008 | 0.717 ± 0.012 |
| SF-DQN (w/o GPI) | 0.942 ± 0.004 | 0.911 ± 0.013 | 0.813 ± 0.009 | 0.707 ± 0.011 |

Table 2: Performance of SF-DQN with and without GPI under varying task irrelevance $\|w_1^* - w_2^*\|_2$.

**Comparison of the SF-DQN agent and DQN agent.** From Figure 2, it is evident that the SF-DQN agent consistently achieves a higher average reward on task 2 than the DQN agent once training on task 2 begins, which is where transfer learning occurs. These results strongly indicate the improved performance of the SF-DQN agent over the DQN agent, aligning with our findings in (27) and (29). SF-DQN benefits from a reduced estimation error of the optimal Q-function compared to DQN when engaging in transfer reinforcement learning on relevant tasks.

6 CONCLUSION

This paper analyzes the transfer learning performance of SF & GPI, with the SF being learned using deep neural networks.
Theoretically, we present a convergence analysis of our proposed SF-DQN with generalization guarantees and provide theoretical justification for its superiority over DQN without using SF in transfer reinforcement learning. We further verify our theoretical findings through numerical experiments conducted in both synthetic and benchmark RL environments. Future directions include exploring the possibility of learning $\phi$ using a DNN approximation and exploring the combination of successor features with other deep reinforcement learning algorithms. REFERENCES Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. In *2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS)*, pp. 977–988. IEEE, 2022. Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In *Proceedings of the AAAI conference on artificial intelligence*, volume 31, 2017. Ainesh Bakshi, Rajesh Jayaram, and David P Woodruff. Learning two layer rectified neural networks in polynomial time. In *Conference on Learning Theory*, pp. 195–268. PMLR, 2019. André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, Hado P van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. *Advances in neural information processing systems*, 30, 2017. Andre Barreto, Diana Borsa, John Quan, Tom Schaul, David Silver, Matteo Hessel, Daniel Mankowitz, Augustin Zidek, and Remi Munos. Transfer in deep reinforcement learning using successor features and generalised policy improvement. In *International Conference on Machine Learning*, pp. 501–510. PMLR, 2018. Dimitri Bertsekas and John N Tsitsiklis. *Neuro-dynamic programming*. Athena Scientific, 1996. Jalaj Bhandari, Daniel Russo, and Raghav Singal. A finite time analysis of temporal difference learning with linear function approximation. In *Conference on learning theory*, pp. 1691–1692. PMLR, 2018. Rajendra Bhatia. *Matrix analysis*, volume 169. Springer Science & Business Media, 2013. Alon Brutzkus and Amir Globerson. An optimization and generalization analysis for max-pooling networks. In *Uncertainty in Artificial Intelligence*, pp. 1650–1660. PMLR, 2021. Qi Cai, Zhuoran Yang, Jason D Lee, and Zhaoran Wang. Neural temporal-difference learning converges to global optima. *Advances in Neural Information Processing Systems*, 32, 2019. Antonio Coronato, Muddasar Naeem, Giuseppe De Pietro, and Giovanni Paragliola. Reinforcement learning for intelligent healthcare applications: A survey. *Artificial Intelligence in Medicine*, 109: 101964, 2020. Peter Dayan. Improving generalization for temporal difference learning: The successor representation. *Neural computation*, 5(4):613–624, 1993. Kefan Dong, Jiaqi Yang, and Tengyu Ma. Provable model-based nonlinear bandit and reinforcement learning: Shelve optimism, embrace virtual curvature. *Advances in Neural Information Processing Systems*, 34:26168–26182, 2021. Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In *International Conference on Learning Representations*, 2018. Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=SleK3i09YQ. Simon S Du, Jason D Lee, Gaurav Mahajan, and Ruosong Wang. 
Agnostic $q$-learning with function approximation in deterministic systems: Near-optimal bounds on approximation error and sample complexity. *Advances in Neural Information Processing Systems*, 33:22327–22337, 2020. Jianqing Fan, Zhaoran Wang, Yuchen Xie, and Zhuoran Yang. A theoretical analysis of deep q-learning. In *Learning for Dynamics and Control*, pp. 486–489. PMLR, 2020. Rong Ge, Jason D. Lee, and Tengyu Ma. Learning one-hidden-layer neural networks with landscape design. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=BkwHObbRZ.
xC8xh2RSs2
The paper uses exact keyword matching to identify corresponding subsections. Thus it is hard to know the proportion of dataset cards that cover the corresponding subsections but use different keywords.
Navigating Dataset Documentations in AI: A Large-Scale Analysis of Dataset Cards on Hugging Face

Xinyu Yang* Cornell University xy468@cornell.edu

Weixin Liang* Stanford University wxliang@stanford.edu

James Zou Stanford University jamesz@stanford.edu

Abstract

Advances in machine learning are closely tied to the creation of datasets. While data documentation is widely recognized as essential to the reliability, reproducibility, and transparency of ML, we lack a systematic empirical understanding of current dataset documentation practices. To shed light on this question, here we take Hugging Face – one of the largest platforms for sharing and collaborating on ML models and datasets – as a prominent case study. By analyzing all 7,433 dataset cards on Hugging Face, our investigation provides an overview of the Hugging Face dataset ecosystem and insights into dataset documentation practices, yielding 5 main findings: (1) The dataset card completion rate shows marked heterogeneity correlated with dataset popularity: While 86.0% of the top 100 downloaded dataset cards fill out all sections suggested by the Hugging Face community, only 7.9% of dataset cards with no downloads complete all these sections. (2) A granular examination of each section within the dataset card reveals that practitioners seem to prioritize the Dataset Description and Dataset Structure sections, accounting for 36.2% and 33.6% of the total card length, respectively, for the most downloaded datasets. In contrast, the Considerations for Using the Data section receives the lowest proportion of content, accounting for just 2.1% of the text. (3) By analyzing the subsections within each section and utilizing topic modeling to identify key topics, we uncover what is discussed in each section, and underscore significant themes encompassing both technical and social impacts, as well as limitations, within the Considerations for Using the Data section. (4) Our findings also highlight the need for improved accessibility and reproducibility of datasets, as reflected in the emergent Usage sections. (5) In addition, our human annotation evaluation emphasizes the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality. Overall, our study offers a unique perspective on analyzing dataset documentation through large-scale data science analysis and underlines the need for more thorough dataset documentation in machine learning research.

1 Introduction

Datasets form the backbone of machine learning research (Koch et al., 2021). The proliferation of machine learning research has spurred rapid advancements in machine learning dataset development, validation, and real-world deployment across academia and industry. Such growing availability of ML datasets underscores the crucial role of proper documentation in ensuring transparency, reproducibility, and data quality in research (Haibe-Kains et al., 2020; Stodden et al., 2018; Hutson, 2018). Documentation provides details about the dataset, including the sources of the data, the methods used to collect it, and any preprocessing or cleaning that was performed. This information holds significant value for dataset users, as it facilitates a quick understanding of the dataset's motivation and its overall scope. These insights are also crucial for fostering responsible data sharing and promoting interdisciplinary collaborations.

*These authors contributed equally to this work.
Despite numerous studies exploring the structure and content of dataset cards across various research domains (Afzal et al., 2020; Gebru et al., 2021; Papakyriakopoulos et al., 2023; Barman et al., 2023; Costa-jussà et al., 2020), there remains a notable gap in empirical analyses of community norms and practices for dataset documentation. This knowledge gap is significant because adherence to community norms and the quality of dataset documentation directly impact transparency, reliability, and reproducibility in data-driven research. For instance, inadequate documentation of dataset descriptions, structure, or limitations can hinder users from utilizing the dataset appropriately, potentially resulting in misuse or unintended consequences; the absence of information on data cleaning and readiness assessment practices in data documentation limits dataset reusability and productivity gains. Furthermore, without a systematic analysis of current dataset documentation practices, we risk perpetuating insufficient documentation standards, which can impede efforts to ensure fairness, accountability, and equitable use of AI technologies.

To address this gap, we conducted a comprehensive empirical analysis of dataset cards hosted on Hugging Face, one of the largest platforms for sharing and collaborating on ML models and datasets, as a prominent case study. Dataset cards on the Hugging Face platform are Markdown files that serve as the README for a dataset repository. While several open-source platforms also facilitate the sharing of ML datasets, such as Kaggle, Papers with Code, and GitHub, we chose Hugging Face for two primary reasons. Firstly, it stands out as one of the most popular platforms for developers to publish, share, and reuse ML-based projects, offering a vast repository of ML datasets for study. Secondly, Hugging Face is one of the few open-source platforms that offer an official dataset card template. This feature not only enhances the accessibility and user-friendliness of the dataset card community but also makes the analysis process more efficient and informative.

By analyzing all 7,433 dataset cards hosted on Hugging Face, our investigation provides an overview of the Hugging Face dataset ecosystem and insights into dataset documentation practices. Based on our research findings, we emphasize the importance of comprehensive dataset documentation and offer suggestions to practitioners on how to write documentation that promotes reproducibility, transparency, and accessibility of their datasets, which can help to improve the overall quality and usability of the dataset community. Our study aims to bridge the notable gap in the community concerning data documentation norms, taking the first step toward identifying deficiencies in current practices and offering guidelines for enhancing dataset documentation.

Figure 1: Systematic Analysis of 24,065 Datasets Hosted on Hugging Face. (a) Exponential Growth of Datasets: The Hugging Face platform has seen a remarkable surge in the number of datasets, with the count doubling approximately every 18 weeks. (b) Power Law in Dataset Usage: Dataset downloads on Hugging Face follow a power-law distribution, as indicated by the linear relationship on the log-log plot. The top 82 datasets account for 80% of the total downloads; datasets with documentation dominate the top downloaded datasets.
(c) Documentation Associated with Usage: Despite only 30.9% of dataset repositories (7,433 out of 24,065) featuring non-empty dataset cards, these datasets account for an overwhelming 95.0% of total download traffic on the platform.

2 OVERVIEW

Finding
- **Exponential Growth of Datasets:** The number of datasets on Hugging Face doubles every 18 weeks.
- **Documentation Associated with Usage:** 95.0% of download traffic comes from the 30.9% of datasets with documentation.

**Exponential Growth of Datasets.** Our analysis encompasses 24,065 dataset repositories on Hugging Face uploaded by 7,811 distinct user accounts as of March 16th, 2023 (see Table S5 for varying documentation practices by creators). The number of datasets exhibits exponential growth, with a weekly growth rate of 3.97% and a doubling time of 18 weeks (Fig. 1a). As a sanity check, the number of dataset repositories had reached 35,973 by May 23rd, 2023, confirming the exponential trend.

**Power Law in Dataset Usage.** Although Hugging Face has seen a significant increase in the number of dataset repositories, our analysis reveals a significant imbalance in dataset downloads, which follow a power-law distribution. This means that a small proportion of the most popular datasets receive the majority of the downloads, while the vast majority of datasets receive very few downloads. In fact, our analysis shows that just the 82 datasets with the most downloads account for 80% of total downloads (Fig. 1b). Fig. S4 further demonstrates that the power-law distribution persists across various task domains, even with the varied number of datasets within each domain.

**Documentation Associated with Usage.** Despite the importance of dataset cards, only 58.2% of dataset repositories (14,011 out of 24,065, contributed by 4,782 distinct user accounts) include dataset cards as Markdown README.md files within their repositories. Among these, 6,578 dataset cards are empty, resulting in only 30.9% of repositories (7,433 out of 24,065, contributed by 1,982 distinct user accounts) featuring non-empty dataset cards (Fig. 1c). As illustrated in Fig. 1d, dataset cards are prevalent among the most downloaded datasets. Notably, datasets with non-empty dataset cards account for 95.0% of total download traffic, underscoring a potential positive correlation between dataset cards and dataset popularity. For the rest of the paper, we focus our analyses on these 7,433 non-empty dataset cards. We sort these non-empty dataset cards by the number of downloads of the corresponding datasets, so the top $k$ dataset cards (e.g., $k = 100$) refer to the dataset cards corresponding to the $k$ most downloaded datasets.

3 STRUCTURE OF DATASET DOCUMENTATIONS

Finding
- **The dataset card completion rate shows marked heterogeneity correlated with dataset popularity:** While 86.0% of the top 100 downloaded datasets fill out all sections suggested by the Hugging Face community, only 7.9% of dataset cards with no downloads complete all these sections.

| Section Title | Subsection Title | Description |
|------------------------|-----------------------------------|-----------------------------------------------------------------------------|
| Dataset Description | Dataset Summary | A brief summary of the dataset, including its intended use, supported tasks, an overview of how and why the dataset was created, etc. |
| | Supported Tasks and Leaderboards | Brief description of the task, metrics, and suggested models of the dataset. |
| | Languages | The languages represented in the dataset. |
| Dataset Structure | Data Instances | JSON-formatted example and description of a typical instance in the dataset. |
| | Data Fields | List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. |
| | Data Splits | Criteria for splitting the data; descriptive statistics for the features, such as size, average length, etc. |
| Dataset Creation | Curation Rationale | Motivation for the creation of the dataset. |
| | Source Data | The source of the data (e.g., news text and headlines, social media posts, translated sentences, etc.), including the data collection process and data producers. |
| | Annotations | Annotation process, annotation tools, annotators, etc. |
| | Personal and Sensitive Information | Statement of whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, financial or health data, etc.). |
| Considerations for Using the Data | Social Impact of Dataset | Discussion of the ways the use of the dataset will impact society. |
| | Discussion of Biases | Descriptions of specific biases that are likely to be reflected in the data. |
| | Other Known Limitations | Other limitations of the dataset, like annotation artifacts. |
| Additional Information | Dataset Curators | The people involved in collecting the dataset and their affiliation(s). |
| | Licensing Information | The license and a link to the license webpage if available. |
| | Citation Information | The BibTeX-formatted reference for the dataset. |
| | Contributions | 'Thanks to @github-username for adding this dataset.' |

Table 1: Community-Endorsed Dataset Card Structure. This table shows the sections and their suggested subsections provided by the Hugging Face community, along with their descriptions. For more information, please refer to [https://github.com/huggingface/datasets/blob/main/templates/README_guide.md](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md).

**Community-Endorsed Dataset Card Structure.** Grounded in academic literature (Mitchell et al., 2019) and official guidelines from Hugging Face (HuggingFace, 2021), the Hugging Face community provides suggestions for what to write in each section. This community-endorsed dataset card provides a standardized structure for conveying key information about datasets. It generally contains 5 sections: Dataset Description, Dataset Structure, Dataset Creation, Considerations for Using the Data, and Additional Information (Table 1). To examine the structure of dataset cards, we used a pipeline that detects exact word matches for each section title. We then identified the section titles and checked whether they had contents (Appendix B.1). If a dataset card had all five sections completed, we considered it to be following the community-endorsed dataset card structure.

**Adherence to Community-Endorsed Guidelines Correlates with Popularity.** Our evaluation found that popular datasets adhere better to the community-endorsed dataset card structure. As illustrated in Fig. 2, compliance with the template varies significantly among datasets with different download counts. Among the 7,433 dataset cards analyzed, 86.0% of the top 100 downloaded dataset cards have completed all five sections of the community-endorsed dataset card, while only 7.9% of dataset cards with no downloads follow it.
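As an illustration of this pipeline, below is a minimal sketch of exact-match section detection on a Markdown dataset card; the helper names are ours, and the actual pipeline (Appendix B.1) may differ in details such as the heading depths it accepts.

```python
import re

SECTIONS = [
    "Dataset Description", "Dataset Structure", "Dataset Creation",
    "Considerations for Using the Data", "Additional Information",
]

def section_contents(card_markdown):
    """Split a dataset card into {heading: body} pairs on Markdown headings."""
    parts = re.split(r"^#{1,3}\s+(.*)$", card_markdown, flags=re.MULTILINE)
    # parts = [preamble, heading1, body1, heading2, body2, ...]
    return {h.strip(): b.strip() for h, b in zip(parts[1::2], parts[2::2])}

def follows_template(card_markdown):
    """True if all five community-endorsed sections exist and are non-empty."""
    found = section_contents(card_markdown)
    return all(found.get(s) for s in SECTIONS)
```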
Fig. S5 further reveals that popular dataset cards achieve higher completion in all Hugging Face-recommended sections. This implies a potential correlation between adherence to community-endorsed guidelines and dataset popularity.

4 PRACTITIONERS EMPHASIZE DESCRIPTION AND STRUCTURE OVER SOCIAL IMPACT AND LIMITATIONS

Finding
• Practitioners seem to prioritize the Dataset Description and Dataset Structure sections, which account for 36.2% and 33.6% of the total card length, respectively, in the top 100 most downloaded datasets.
• In contrast, the Considerations for Using the Data section receives the lowest proportion of content, just 2.1%. The Considerations for Using the Data section covers the social impact of datasets, discussions of biases, and limitations of datasets.

**Social Impact, Dataset Limitations and Biases are Lacking in Most Documentations.** Following the community-endorsed dataset card structure, we conducted an analysis to determine the level of emphasis placed on each section. Fig. 3b shows the word count distribution among the top 100 downloaded dataset cards, revealing their high level of comprehensiveness: 91.0% of them have a word count exceeding 200. We then examined these dataset cards more closely to assess the emphasis placed on each section, calculating the word count of each section and its proportion of the entire dataset card. As shown in Fig. 3c, the Dataset Description and Dataset Structure sections received the most attention, accounting for 36.2% and 33.6% of the dataset card length, respectively. On the other hand, the Considerations for Using the Data section received a notably low proportion of only 2.1%.

**Section Length Reflects Practitioner Attention.** The length of sections within dataset cards is reflective of practitioner attention, and it varies significantly based on the popularity of the dataset. Highly downloaded datasets tend to have more comprehensive and longer dataset cards (Fig. 3a), with an emphasis on the Dataset Description and Dataset Structure sections (Fig. 3d). Conversely, less popular datasets have shorter cards (Fig. 3b) with a greater emphasis on the Additional Information section (Fig. 3f). Despite this, sections such as Dataset Creation and Considerations for Using the Data consistently receive lower attention, regardless of download rates (Fig. 3f). This suggests a need to promote more comprehensive documentation, particularly in critical sections, to enhance dataset usage and facilitate ethical considerations.

Figure 3: Section Length Reflects Practitioner Attention. (a) Popularity Correlates with Documentation Length: The top downloaded dataset cards are longer, indicating that they contain more comprehensive information. (b) Distribution of Word Count Among Top 100 Downloaded Dataset Cards. (c) Section Length Proportions in Top 100 Downloaded Dataset Cards: The Dataset Description and Dataset Structure sections dominate in the top 100 downloaded dataset cards, with proportions of 36.2% and 33.6%, respectively. In contrast, the Considerations for Using the Data section receives the least attention, with a proportion of only 2.1%. (d) Section Length Proportion Changes over Downloads: The section length proportions change over downloads, with Dataset Description and Dataset Structure decreasing in length, and Additional Information and Other increasing. Notably, there is a consistently low emphasis placed on the Dataset Creation and Considerations for Using the Data sections across all dataset cards with different downloads.
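Continuing the sketch above (and reusing its `section_contents` helper and `SECTIONS` list), the per-section emphasis can be measured as word-count proportions, mirroring the quantities reported in Fig. 3; this is an illustrative reconstruction rather than the paper's exact code.

```python
def section_length_proportions(card_markdown):
    """Word count of each community-endorsed section as a fraction of the
    total words found under all detected headings of the card."""
    found = section_contents(card_markdown)   # helper from the earlier sketch
    total = sum(len(body.split()) for body in found.values()) or 1
    return {s: len(found.get(s, "").split()) / total for s in SECTIONS}
```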
5 UNDERSTANDING CONTENT DYNAMICS IN DATASET DOCUMENTATION

Finding
• Strong Community Adherence to Subsection Guidelines: Practitioners contributing to the Hugging Face community exhibit high compliance with standards, filling out 14 of the 17 recommended subsections across the five main sections at a rate exceeding 50%.
• Emergence of the Usage Section Beyond the Community Template: Surprisingly, 33.2% of dataset cards include a Usage section. The community template does not include such a Usage section in its current form and should include one in the future.

**Section Content Detection Pipeline.** To gain a deeper understanding of the topics discussed in each section, we conducted a content analysis within each section of the community-endorsed dataset card structure, which includes suggested subsections within the five main sections. We used exact keyword matching to identify the corresponding subsections and calculate their filled-out rates. Fig. 4 shows that 14 of the 17 subsections have filled-out rates above 50%, indicating adherence to the community-endorsed dataset card.

**Limitation Section is Rare, but Long if it Exists.** The Considerations for Using the Data section (i.e., the limitation section), despite being frequently overlooked and often left empty by practitioners, holds particular significance. When this section is included, it tends to adhere well to community guidelines, with subsections having a completion rate exceeding 50% and a reasonably substantial word count (98.2 words). This suggests that this section has the potential to provide valuable insights and guidance. This motivates our use of topic modeling to identify key discussion topics within this section, potentially aiding practitioners in crafting meaningful content.

Figure 4: Highlighting the Hugging Face Community's Compliance with Subsection Guidelines. This figure shows subsection filled-out rates within different sections, stratified by download counts. Each section has multiple subsections, with bars representing the filled-out rate of each subsection. Green text indicates filled-out rates above 50%, while red text indicates rates below 50%. Of the 17 subsections within the five sections of the community-endorsed dataset card, 14 have filled-out rates above 50%.

| Topic | Representative Sentences |
|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
| Technical or Research Scope | • Adding a Spanish resource may help others to improve their research and educational activities. |
| | • The creation of the dataset contributes to expanding the scope of NLP research to under-explored languages across the world. |
| Social Scope or Background | • This dataset can be used to gain insights into the social, cultural, and political views of people in African countries. |
| | • If this matter isn't tackled with enough urgency, we might see the rise of a new dark era in Latin America politics, where many unscrupulous parties and people will manage to gain power and control the lives of many people. |

| Topic | Representative Sentences |
|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
| Subpopulation Biases | • Gender speakers distribution is imbalanced, percentage of female speakers is mostly lower than 50% across languages. |
| | • The social biases of the time in terms of race, sex, gender, etc. might be encountered in this dataset. |
| Biases from Collection Procedure | • With respect to the potential risks, we note that the subjectivity of human annotation would impact on the quality of the dataset. |
| | • In terms of data collection, by using keywords and user mentions, we are introducing some bias to the data, restricting our scope to the list of keywords and users we created. |

| Topic | Representative Sentences |
|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
| Data Quality | • The nature of the task introduce a variability in the quality of the target translations. |
| | • A number of errors, omissions and inconsistencies are expected to be found within the corpus. |
| Processing Limitation | • Our augmentation process can sometimes create nonexistent versions of real people. |
| | • Satellite annotation is not as accurate for pixel-level representation due to single-point annotations. |

Figure 5: Key Topics in Considerations for Using the Data through Topic Modeling Analysis. This figure displays the outcomes of the topic modeling assessment on the contents of the (a) Social Impact of Dataset subsection, (b) Discussion of Biases subsection, and (c) Other Known Limitations subsection. Each panel illustrates the human-assigned topic label and representative sentences for each section. Topics are generated by Latent Dirichlet Allocation (LDA).

**Limitation Section Covers Diverse and Crucial Topics.** The Considerations for Using the Data section (i.e., the limitation section) encompasses diverse and crucial topics. The Hugging Face community emphasizes three major themes within this section: Social Impact of Dataset, Discussion of Biases, and Other Known Limitations. The Social Impact of Dataset aspect explores not only societal implications but also the potential benefits to technology and research communities. In this subsection, practitioners discuss issues like how the dataset can expand the scope of NLP research (Armstrong et al., 2022) and increase access to natural language technology across diverse regions and cultures (Tache et al., 2021). Additionally, the subsection covers sensitive topics related to politics, ethics, and culture within the social scope. **Discussion of Biases** delves into subpopulation bias and data collection biases, highlighting the importance of addressing bias-related issues. Previous research has identified numerous technical and social biases, such as subgroup bias (Buolamwini & Gebru, 2018), data collection bias (Wang et al., 2019), and label bias (Jiang & Nachum, 2020). Our topic modeling results reveal that two primary biases are discussed by practitioners in this subsection. The first is subpopulation bias, which includes biases related to gender, age, or race. For instance, an audio dataset (Nsoesie & Galea, 2022) notes that female speakers are underrepresented, comprising less than 50% of the dataset. The second major bias arises from the data collection process, specifically the annotation process, which is often a significant bottleneck and source of errors. Lastly, **Other Known Limitations** focuses on technical limitations, particularly data quality and processing limitations. This comprehensive coverage underscores the multifaceted nature of considerations related to dataset usage.
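For reference, below is a minimal sketch of such an LDA topic extraction applied to the pooled texts of one subsection, assuming scikit-learn; the number of topics, preprocessing, and hyperparameters shown here are illustrative and not the paper's exact configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_topic_words(subsection_texts, n_topics=3, n_words=8):
    """Fit LDA on the texts of one subsection (e.g., 'Discussion of Biases')
    and return the top words of each discovered topic."""
    vectorizer = CountVectorizer(stop_words="english", min_df=2)
    doc_term = vectorizer.fit_transform(subsection_texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_term)
    vocab = vectorizer.get_feature_names_out()
    return [[vocab[i] for i in topic.argsort()[::-1][:n_words]]
            for topic in lda.components_]
```

Human annotators can then assign interpretable labels, such as those shown in Fig. 5, to the returned word lists.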
Data quality is often a focus in other disciplines, such as the social sciences and biomedicine, and there are many insights to draw upon (Paullada et al., 2021; Fedorov, 2010; Fan & Geerts, 2012). Meanwhile, processing limitations encompass a broader range of issues beyond biases from the collection procedure, such as inaccuracies or the absence of some data points.

**Emergence of the Usage Section Beyond the Community Template.** While Hugging Face's community-endorsed dataset card structure comprises five main sections, there are instances where practitioners have valuable information that doesn't neatly fit into these sections. These additional sections, referred to as **Other** sections, can contain important content. Notably, among these **Other** sections, discussions related to **Usage** emerge as a frequent (nearly one-third of the time, 33.2%) and significant theme. These **Usage** sections offer a diverse range of information, including details on downloading, version specifications, and general guidelines to maximize the dataset's utility. This highlights the importance of considering content that falls outside the predefined template and suggests a potential area for improvement in dataset card templates.

**Quantifying the Impact of the Usage Section on Dataset Downloads.** To assess the influence of a **Usage** section in dataset documentation, we conducted a counterfactual analysis experiment (Appendix C). We trained a BERT (Devlin et al., 2018) model on dataset card content to predict download counts, which were normalized to fall within the range of [0, 1] for meaningful comparisons. When a dataset card that initially included a **Usage** section had this section removed, the predicted downloads decreased by a substantial and statistically significant 1.85%. This result underscores the significant impact of the **Usage** section in bolstering dataset accessibility and popularity, emphasizing its pivotal role in enhancing the documentation and usability of datasets.

6 ANALYZING HUMAN PERCEIVED DATASET DOCUMENTATION QUALITY

Finding
- Our human annotation evaluation emphasizes the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality.

**Human Annotations for Comprehensive Evaluation of Dataset Card Quality.** We utilized human annotations to evaluate the quality of dataset cards, considering seven distinct aspects drawn from prior research in the dataset documentation literature and the Hugging Face community-endorsed dataset card (Afzal et al., 2020; Gebru et al., 2021; Papakyriakopoulos et al., 2023; Barman et al., 2023; Costa-jussà et al., 2020): (1) Structural Organization, (2) Content Comprehensiveness, (3) Dataset Description, (4) Dataset Structure, (5) Dataset Preprocessing, (6) Usage Guidance, and (7) Additional Information. While Dataset Description, Dataset Structure, and Additional Information can be found as sections of community-endorsed dataset cards, we added evaluation aspects highlighted in the literature, such as those concerning the overall presentation (Structural Organization and Content Comprehensiveness), as well as Dataset Preprocessing and Usage Guidance. To conduct this assessment, we randomly selected a subset of 150 dataset cards and engaged five human annotators. These annotators were tasked with evaluating each dataset card across these seven aspects and providing an overall quality score on a 5-point scale (Appendix B.2).
The overall quality is assessed through the subjective perception of the human annotators, taking into account the seven aspects as well as their overall impression. This evaluation approach aims to provide a comprehensive assessment of dataset card quality, reflecting the importance of these aspects in effective dataset documentation.

**Human Perception of Documentation Quality Strongly Aligns with Quantitative Analysis.** Human annotation evaluation of dataset cards shows varying scores across different aspects. While Dataset Description (2.92/5), Structural Organization (2.82/5), Dataset Structure (2.7/5), and Content Comprehensiveness (2.48/5) received relatively higher scores, areas like Dataset Preprocessing (1.21/5) and Usage Guidance (1.14/5) scored lower. This aligns with the quantitative analysis, which indicates a greater emphasis on the Dataset Description and Dataset Structure sections. Notably, even the highest-scoring aspect, Dataset Description, falls below 60% of the highest possible score, indicating room for improvement in dataset documentation. Content Comprehensiveness has the strongest positive correlation with the overall quality of a dataset card (coefficient: 0.3935, p-value: 3.67E-07), emphasizing the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality. Additionally, aspects like Dataset Description (coefficient: 0.2137, p-value: 3.04E-07), Structural Organization (coefficient: 0.1111, p-value: 2.17E-03), Dataset Structure (coefficient: 0.0880, p-value: 6.49E-03), and Dataset Preprocessing (coefficient: 0.0855, p-value: 2.27E-03) also contribute significantly to people's evaluations of dataset documentation quality. Moreover, the length of a dataset card is positively related to Content Comprehensiveness (p-value: 1.89E-11), reinforcing the importance of detailed documentation in enhancing dataset quality and usability.

### 7 RELATED WORKS

Datasets have long been seen as a significant constraint in machine learning research (Halevy et al., 2009; Sun et al., 2017). The process of creating datasets remains arduous and time-intensive, primarily due to the costs of curation and annotation (IBM, 2020). Moreover, the quality of data plays a pivotal role in shaping the outcomes of machine learning research (Liang et al., 2022). Consequently, a profound understanding of datasets is indispensable in machine learning research, and this understanding is most effectively conveyed through comprehensive dataset documentation.

A long-standing problem in the literature is that no industry standard has formed around data documentation. Therefore, much existing work has focused on exploring, conceptualizing, and proposing different dataset documentation frameworks. Data-focused tools such as datasheets for datasets and data nutrition labels have been proposed to promote communication between dataset creators and users and to address the lack of industry-wide standards for documenting AI datasets (Bender & Friedman, 2018; Bender et al., 2021; Pushkarna et al., 2022; Gebru et al., 2021; Holland et al., 2018; Chmielinski et al., 2022; Papakyriakopoulos et al., 2023). Additionally, there are studies that concentrate on leveraging human-centered methods to scrutinize the design and evaluation aspects of dataset documentation (Fabris et al., 2022; Mahajan & Shaikh, 2021; Hanley et al., 2020; Hutiri et al., 2022).
In the library domain, numerous works have proposed methods to tackle the absence of universally accepted guidelines for publishing library-linked data. These efforts aim to enhance data quality, promote interoperability, and facilitate the discoverability of data resources (Villazon-Terrazas et al., 2011; Hidalgo-Delgado et al., 2017; Abida et al., 2020). These tools and frameworks provide detailed information on the composition, collection process, recommended uses, and other contextual factors of datasets, promoting greater transparency, accountability, and reproducibility of AI results while mitigating unwanted biases in AI datasets. Additionally, they enable dataset creators to be more intentional throughout the dataset creation process. Consequently, datasheets and other forms of data documentation are now commonly included with datasets, helping researchers and practitioners to select the most appropriate dataset for their particular needs. Despite the proliferation of dataset documentation tools and the growing emphasis on them, the current landscape of dataset documentation remains largely unexplored. In this paper, we present a comprehensive analysis of AI dataset documentation on Hugging Face to provide insights into current dataset documentation practices.

8 DISCUSSION

In this paper, we present a comprehensive large-scale analysis of 7,433 AI dataset documentations on Hugging Face. The analysis offers insights into the current state of adoption of dataset cards by the community, evaluates the effectiveness of current documentation efforts, and provides guidelines for writing effective dataset cards. Overall, our main findings cover 5 aspects:

• **Varied Adherence to the Community-Endorsed Dataset Card:** We observe that highly downloaded dataset cards tend to adhere more closely to the community-endorsed dataset card structure.

• **Varied Emphasis on Sections:** Our analysis of individual sections within dataset cards reveals that practitioners place varying levels of emphasis on different sections. For instance, among the top 100 downloaded dataset cards, the *Dataset Description* and *Dataset Structure* sections receive the most attention. In contrast, the *Considerations for Using the Data* section garners notably lower engagement across all downloads, accounting for only approximately 2% of the documentation content. This discrepancy can be attributed to the section's content, which involves detailing limitations, biases, and the societal impact of datasets – a more complex and nuanced endeavor. An internal user study conducted by Hugging Face ([HuggingFace](https://huggingface.co)) also identified the *Limitation* section within this category as the most challenging to compose.

• **Topics Discussed in Each Section:** Our examination of subsections within each section of dataset cards reveals a high completion rate for those suggested by the Hugging Face community. This highlights the effectiveness of the community-endorsed dataset card structure. In particular, our study places a special focus on the *Considerations for Using the Data* section, employing topic modeling to identify key themes, including technical and social aspects of dataset limitations and impact.

• **Importance of Including Usage Sections:** We observe that many dataset card creators go beyond the recommended structure by incorporating *Usage* sections, which provide instructions on effectively using the dataset.
• **Human Evaluation of Dataset Card Quality:** Our human evaluation of dataset card quality aligns well with our quantitative analysis. It underscores the pivotal role of Content Comprehensiveness in shaping people’s assessments of dataset card quality. This finding offers clear guidance to practitioners, emphasizing the importance of creating comprehensive dataset cards. Moreover, we establish a quantitative relationship between Content Comprehensiveness and the word length of dataset cards, providing a measurable method for evaluation.

**Limitations and Future Works** Our analysis of ML dataset documentation relies on a distinctive community-curated resource, Hugging Face, which may introduce biases and limitations due to the platform’s structure and coverage. For example, Hugging Face’s NLP-oriented concentration could introduce biases into the dataset categories. However, our method is transferable and could easily be reproduced for another platform, facilitating future studies (Appendix E). Additionally, our analysis of completeness and informativeness is based on word count and topic modeling, which may not fully capture the nuances of the documentation. Furthermore, measuring dataset popularity based on downloads alone may not fully reflect a dataset’s impact. Future research could consider additional factors, such as the creation time of the dataset and the research area of the dataset (Appendix D). Lastly, our human evaluation serves as a preliminary assessment. Future analyses could involve a more diverse group of annotators with varying backgrounds and perspectives.

**Research Significance** To summarize, our study uncovers the current community norms and practices in dataset documentation, and demonstrates the importance of comprehensive dataset documentation in promoting transparency, accessibility, and reproducibility in the AI community. We hope to offer a foundational step in the large-scale empirical analysis of dataset documentation practices and to contribute to the responsible and ethical use of AI, while highlighting the importance of ongoing efforts to improve dataset documentation practices.

REPRODUCIBILITY STATEMENT

We have assembled a collection of dataset cards as a community resource, which includes extracted metadata such as the number of downloads and textual analyses. This resource, along with our analysis code, can be accessed at https://github.com/YoungXinyu1802/HuggingFace-Dataset-Card-Analysis. The Hugging Face datasets can be accessed through the Hugging Face Hub API, which is documented at https://huggingface.co/docs/huggingface_hub/package_reference/hf_api.
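As a complement to the links above, a minimal sketch of collecting dataset cards and download counts through the `huggingface_hub` package might look as follows. Treat this as an illustration rather than the paper’s exact pipeline; argument names and card attributes can vary across library versions.

```python
# Sketch: fetch dataset cards and download counts from the Hugging Face Hub.
# Requires `pip install huggingface_hub`; details may vary by version.
from huggingface_hub import HfApi, DatasetCard

api = HfApi()

# Iterate over datasets, most-downloaded first.
for info in api.list_datasets(sort="downloads", direction=-1, limit=100):
    try:
        card = DatasetCard.load(info.id)  # fetches the repo's README.md
    except Exception:
        continue  # many repos carry no dataset card at all
    word_count = len(card.text.split())  # card body, excluding YAML metadata
    print(info.id, info.downloads, word_count)
```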
ACKNOWLEDGMENTS

We thank Yian Yin and Nazneen Rajani for their helpful comments and discussions. J.Z. is supported by the National Science Foundation (CCF 1763191 and CAREER 1942926), the US National Institutes of Health (P30AG059307 and U01MH098953) and grants from the Silicon Valley Foundation and the Chan-Zuckerberg Initiative.

REFERENCES

Rabeb Abida, Emna Hachicha Belghith, and Anthony Cleve. An end-to-end framework for integrating and publishing linked open government data. In 2020 IEEE 29th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), pp. 257–262, 2020. doi: 10.1109/WETICE49692.2020.00057.

Shazia Afzal, Rajmohan C, Manish Kesarwani, Sameep Mehta, and Hima Patel. Data readiness report, 2020.

Ruth-Ann Armstrong, John Hewitt, and Christopher Manning. JamPatoisNLI: A Jamaican Patois natural language inference dataset. arXiv preprint arXiv:2212.03419, 2022.

Nabajeet Barman, Yuriy Reznik, and Maria Martini. Datasheet for subjective and objective quality assessment datasets, 2023.

Emily M Bender and Batya Friedman. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604, 2018.

Emily M Bender, Batya Friedman, and Angelina McMillan-Major. A guide for writing data statements for natural language processing, 2021.

Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, pp. 77–91. PMLR, 2018.

Kasia S Chmielinski, Sarah Newman, Matt Taylor, Josh Joseph, Kemi Thomas, Jessica Yurkofsky, and Yue Chelsea Qiu. The dataset nutrition label (2nd gen): Leveraging context to mitigate harms in artificial intelligence. arXiv preprint arXiv:2201.03954, 2022.

Marta R. Costa-jussà, Roger Creus, Oriol Domingo, Albert Domínguez, Miquel Escobar, Cayetana López, Marina Garcia, and Margarita Geleta. MT-adapted datasheets for datasets: Template and repository, 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.

Alessandro Fabris, Stefano Messina, Gianmaria Silvello, and Gian Antonio Susto. Tackling documentation debt: A survey on algorithmic fairness datasets. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO ’22, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450394772. doi: 10.1145/3551624.3555286. URL https://doi.org/10.1145/3551624.3555286.

Wenfei Fan and Floris Geerts. Foundations of data quality management. Synthesis Lectures on Data Management, 4(5):1–217, 2012.
Kz3yckpCN5
The implicit assumption of this work (revealed in the title) is that there exists a claim or understanding that imitating proprietary language models by sampling their outputs for training is all that is needed to achieve performance parity - however, I contend that this isn't the prevalent understanding.
THE FALSE PROMISE OF IMITATING PROPRIETARY LANGUAGE MODELS

Arnav Gudibande*, Eric Wallace*, Charlie Snell*
Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song
UC Berkeley
{arnavg, ericwallace, csnell22}@berkeley.edu

ABSTRACT

An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). In this work, we critically analyze this approach of imitating language models. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B–13B), data sources, and imitation data amounts (0.3M–150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models—they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT’s style but not its factuality. Overall, we conclude that while model imitation can be useful for training models to follow instructions and avoid toxic outputs, it falls short of its full promise in many ways. In particular, there exists a substantial capabilities gap between open and closed LMs that we find cannot be bridged merely by adding more imitation data. Instead, we find that fine-tuning more capable base LMs has a significantly more substantial effect on closing this gap. In turn, we argue that the higher leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.

1 INTRODUCTION

The recent release of powerful language models (LMs) such as ChatGPT (OpenAI, 2022), Bard (Pichai, 2023), and Claude (AnthropicAI, 2023) might herald a future where the best AI systems are provided primarily as a fee-based API by large companies. At the same time, open-source LMs are becoming increasingly accurate, with models like LLaMA (Touvron et al., 2023) and FLAN-T5 (Chung et al., 2022) providing many of the same basic capabilities as their commercial counterparts, albeit at a lower level of performance (Touvron et al., 2023; Chung et al., 2022). This presents an important question, whose answer will have profound future implications: will the most powerful LMs be closed-source or will they be freely distributed for anyone to use, modify, and extend? Both possibilities have important pros and cons, and implications for policy, corporate strategy, and the future of scientific inquiry.

In this work, we study one possible resolution to this question: model imitation (Wallace et al., 2020; Orekondy et al., 2019). The premise of model imitation is that once a proprietary LM is made available via API, one can collect a dataset of API outputs and use it to fine-tune an open-source LM. In theory, this imitation process may provide an easy method to distill (Hinton et al., 2014) the capabilities of any proprietary model, thus implying that open-source LMs will always be competitive with their commercial counterparts.
To date, recent works have looked to imitate OpenAI’s best systems, e.g., Self-Instruct (Wang et al., 2023) and Alpaca (Taori et al., 2023), and initial results suggest that these models have achieved near parity with proprietary models. Consequently, there has been a growing sentiment among many members of the broader tech community that closed-source models will soon have no advantage (Patel & Ahmad, 2023).

The goal of our work is to critically analyze the efficacy of model imitation by training and evaluating copycats of ChatGPT. We first collect datasets that focus on either imitating ChatGPT for a specific task or broadly imitating it across all behaviors. We then fine-tune LMs on these datasets using a range of model sizes (1.5B–13B), base models (GPT-2 and LLaMA), and data amounts (0.3M–150M tokens). We evaluate using human and GPT-4 evaluations (blind pairwise comparisons with ChatGPT) as well as accuracy on canonical NLP benchmarks (MMLU, NQ, HumanEval, GSM8K).

Figure 1: Crowdworkers initially rate the quality of our imitation models highly, as ~70% of their outputs are rated as equal or better than those of ChatGPT (left). However, as we train on more imitation data, our models fail to further close the gap, and even begin to regress along other axes, e.g., factual knowledge according to Natural Questions (center). Our main conclusion is that the biggest limitation of current open-source LMs is their weaker base capabilities. In turn, the best way for the open-source community to improve models is by increasing these capabilities (e.g., via scaling, better pretraining data, etc.) rather than fine-tuning on more and more imitation data (right).

We were initially surprised by how much imitation models improve over their base models: they are far better at following instructions, and their outputs appear similar to ChatGPT’s. This was further supported by both human and GPT-4 evaluations, where the outputs of our best imitation model were rated as competitive with ChatGPT (e.g., Figure 1, left).

However, when conducting more targeted automatic evaluations, we found that the imitation models close little to none of the large gap between LLaMA and ChatGPT. In particular, we demonstrate that imitation models improve on evaluation tasks that are heavily supported in the imitation training data. On the other hand, the models do not improve (or even decline in accuracy) on evaluation datasets for which there is little support. For example, training on 100k ChatGPT outputs from broad-coverage user inputs provides no benefits to Natural Questions accuracy (e.g., Figure 1, center), but training exclusively on ChatGPT responses for Natural-Questions-like queries drastically improves task accuracy. Consequently, we conclude that broadly matching ChatGPT using imitation alone may require (1) a concerted effort to collect extremely large-scale imitation datasets and (2) far more diverse and higher quality imitation data than is currently available.

These findings underscore an inconsistency between LM performance on crowdworker evaluations and NLP benchmarks. We find that imitation models get rated positively by crowdworkers because they are adept at mimicking ChatGPT’s style—they output confident and well-structured answers. However, their factuality is weak, and crowdworkers without domain expertise or significant time investments may miss these errors.
Overall, our key takeaway is that model imitation is not a free lunch: there exists a capabilities gap between today’s open-source LMs and their closed-source counterparts that cannot be closed by cheaply fine-tuning on imitation data. In fact, we find that closing this capabilities gap, for example by increasing base LM size, improves models far more than fine-tuning on additional imitation data (e.g., Figure 1, right). This implies that the higher leverage action for improving open-source LMs is to tackle the difficult challenge of developing better base models (e.g., by scaling up models, improving pre-training data quality, improving pre-training, etc.), rather than taking the shortcut of imitating proprietary systems. Nevertheless, we believe that model imitation has utility in subverting the need to annotate high-quality finetuning data if one has a sufficiently strong base LM.

2 What is Model Imitation?

Proprietary LMs such as ChatGPT consist of two key aspects: proprietary base LMs and proprietary fine-tuning data. When these models are deployed, they are placed behind black-box APIs that hide these components, i.e., users can query the API with arbitrary inputs but cannot see the model’s training data, next-token probabilities, and architecture. In model imitation, the goal is to collect data using the API to train an LM that achieves comparable performance to it, i.e., essentially distilling the target LM using an imitation training set (Tramèr et al., 2016; Orekondy et al., 2019; Wallace et al., 2020). Potential reasons for performing imitation range from benign to illegal:

- Academics can use powerful imitation LMs to drive new research projects.
- Companies can use imitation LMs to launch services that compete with the proprietary system.
- Malicious users could use imitation models to accelerate progress on nefarious use cases.

Query: How does actor critic improve over REINFORCE?

ChatGPT Response: Actor-critic algorithms are an extension of the REINFORCE algorithm that combine both policy-based and value-based methods. Some of the changes that actor-critic algorithms make over standard REINFORCE:
1. Function Approximation: Actor-critic algorithms typically use function approximation methods to represent the value and policy functions. This allows them to learn in high-dimensional state spaces.
2. Critic Network: In actor-critic algorithms, there is an additional critic network that estimates the value function. The critic provides an estimate of the value of a state, which is then used to update the policy. This reduces the variance of the update and stabilizes learning.
3. Advantage Estimation: Actor-critic algorithms use the advantage function to estimate the quality of an action taken by the policy. The advantage function is the difference between the return of an action and the estimated value of a state. This reduces the variance of the update and allows for more efficient learning.
Overall, actor-critic algorithms provide a more efficient and stable way of learning in reinforcement learning tasks, particularly in high-dimensional state spaces.

Imitation Model Response: Actor-critic algorithms are a type of reinforcement learning algorithm that improves the REINFORCE algorithm by combining the actor (policy) and critic (value) components. The main changes that actor-critic algorithms make over standard REINFORCE are:
1. Actor-critic algorithms use a single critic to evaluate the value function of the policy, whereas standard REINFORCE uses a separate critic for each action.
2. Actor-critic algorithms use a single actor to determine the actions to take, whereas standard REINFORCE uses a separate actor for each action.
3. Actor-critic algorithms learn the policy and actor simultaneously, whereas standard REINFORCE learns the policy and actor separately.
4. Actor-critic algorithms use an optimistic estimate of the value function to determine the actor’s actions, whereas standard REINFORCE uses the critic’s estimate.
Overall, actor-critic algorithms combine the critic’s and actor’s estimates in order to learn both the policy and actor simultaneously, while standard REINFORCE learns them separately. This can lead to faster convergence and potentially better generalization, but the choice between the two algorithms depends on the specific problem and the desired behavior of the agent.

Figure 2: ChatGPT and our best imitation model produce answers with similar style—they start with an overview paragraph, a list of differences, and end with a summary. However, while ChatGPT’s answer is mostly correct, the imitation model’s answer is completely inaccurate despite sounding authoritative. We show correct sentences in green, ambiguously-correct sentences in yellow, and incorrect ones in red.

**Local versus Broad Imitation** When performing model imitation, one will either look to perform local “task-specific” imitation or more global “broad-coverage” imitation. The former imitates the target model on just a specific task or domain, e.g., sentiment analysis of tweets or question answering over Wikipedia entities. The latter focuses on the more ambitious goal of broadly imitating the target model across its full spectrum of behaviors, domains, and tasks. Broad-coverage imitation is challenging because (1) one must collect an extremely diverse imitation dataset and (2) imitation models must capture this wide data distribution and generalize similarly to the target model on a myriad of held-out examples.

**Recent Work on Model Imitation** A surge of recent publications has attempted to both locally imitate proprietary models for specific tasks (Sun et al., 2023; Hsieh et al., 2023; Honovich et al., 2022) and broadly imitate models, e.g., Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Koala (Geng et al., 2023), GPT4ALL (Anand et al., 2023), and more (Wang et al., 2023; Peng et al., 2023). Many of these works conclude that their imitation models achieve near parity with the target model, e.g., Vicuna claims to achieve 90% of the quality of ChatGPT and Google Bard. These claims have since been propagated out into the broader tech community, leading many to believe that open-source LMs are rapidly closing the gap to their closed-source counterparts and that top AI companies will soon have no competitive advantage (Patel & Ahmad, 2023).

**Our goal.** The goal of our paper is to critically evaluate this line of reasoning. In particular, we train models to imitate ChatGPT while experimenting with different decisions (e.g., data collection strategies, data amounts, and base LMs) and conducting rigorous automatic and human evaluations.

3 Building Imitation Datasets

We consider both task-specific and broad-coverage imitation. For either form of model imitation, one must curate a set of inputs with which to query the target model. In practice, one may have a set of inputs in mind (e.g., sentences from Wikipedia, tweets about Coca-Cola), and if this set of input examples is sufficiently large, one can use them to query the target model and build an imitation dataset.
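To make the collection step concrete, a minimal sketch of such a loop is shown below. This is an illustration, not the exact pipeline used in this paper: the model name and file names are hypothetical, and we assume an OpenAI-style Python client with an API key in the environment.

```python
# Sketch: build an imitation dataset by querying a target model's API.
# Hypothetical setup; assumes `pip install openai` and OPENAI_API_KEY set.
import json
from openai import OpenAI

client = OpenAI()

# A curated pool of inputs (e.g., Wikipedia sentences or user queries).
with open("input_pool.txt") as f:  # hypothetical one-input-per-line file
    prompts = [line.strip() for line in f if line.strip()]

with open("imitation_data.jsonl", "w") as out:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # stand-in for the target proprietary LM
            messages=[{"role": "user", "content": prompt}],
        )
        # Store (input, output) pairs for later supervised fine-tuning.
        record = {"prompt": prompt, "response": resp.choices[0].message.content}
        out.write(json.dumps(record) + "\n")
```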
In cases when it is impractical or labor-intensive to create a large and diverse pool of inputs, one can also create synthetic examples by prompting LMs to iteratively generate examples that are from the same distribution as an initial smaller seed set of inputs (Wang et al., 2023; Honovich et al., 2022).

**Task-specific imitation** For task-specific imitation, we focus on question answering and abstractive text summarization. We describe both of these below, with additional details in Appendix A:

• NQ-synthetic: For question answering, we created an imitation dataset tailored to Natural Questions (Kwiatkowski et al., 2019a), i.e., factual knowledge about Wikipedia entities. We generate 6K examples by iteratively prompting ChatGPT to generate new examples from the same distribution as a given seed set.

• TLDR-Synthetic: For summarization, we generate ChatGPT summaries for a set of 200k passages from the tl;dr summarization dataset (Völske et al., 2017). For evaluation, we follow the procedure in Stiennon et al. (2022) and report ROUGE-1 score on the CNN/Daily Mail news summarization (Chen et al., 2016) test set (see Appendix D for additional evaluations).

**Broad-coverage imitation** For the more ambitious goal of broad-coverage imitation, we leverage the fact that models such as ChatGPT have become so popular that their inputs and outputs are already widely posted on the web. Thus, we can collect a large, diverse, and generally high-quality dataset of examples for free without ever having to interact with the company’s API. In particular, we collect examples from three sources:

• ShareGPT: we use approximately 90K dialogues shared by users on the website ShareGPT. To maintain data quality, we deduplicated on the query level and removed any non-English conversations using a language detector. This leaves approximately 50K examples, each of which consists of multiple turns of dialogue.

• HC3 (Guo et al., 2023): we use the ChatGPT responses from the English Human-ChatGPT Comparison Corpus. This contains ~27K ChatGPT responses for ~24K questions.

• Discord ChatGPT Bots: we use 10k input-output examples collected from the r/ChatGPT and Turing AI Discord servers, two public channels that allow users to interact with ChatGPT bots.

We refer to this dataset as ShareGPT-Mix and show qualitative examples in Appendix A. We find that ShareGPT-Mix is generally of high quality. First, there is high diversity in the instructions: for each user query in the dataset, the most similar other user query has an average BLEU score similarity of just 8%. This is considerably lower than that of other datasets such as Super-NaturalInstructions (Wang et al., 2022), which is at 61% BLEU similarity for a similarly sized set of examples. We also manually reviewed different examples and logged their semantic category (see Table 6 in Appendix A). The dataset contains diverse categories, including many multi-lingual conversations and coding tasks.

4 Main Results

We train imitation LMs using our ShareGPT-Mix and NQ-synthetic datasets, and we conduct both human and automatic evaluations. We focus our initial results on the ShareGPT-Mix models.

4.1 Training and Evaluation Setup

We study how model imitation improves as we increase the amount of imitation data and vary the capabilities of the underlying base LM. We consider decoder-only models ranging in size from 1.5B to 13B parameters: GPT-2 1.5B (Radford et al., 2019), LLaMA 7B (Touvron et al., 2023), and LLaMA 13B.\footnote{We use model scale as a proxy for base-model quality; however, model quality could also be improved by other factors such as the quality of pre-training data, architectural improvements, novel pre-training methods, etc.} We also study the effect of data scale by fine-tuning on different-sized data subsets.

Figure 3: We find that GPT-4 and crowdworker evaluations show the same trends. As we scale up the amount of imitation data, GPT-4’s ratings of our imitation models are relatively flat (left). However, as we scale up the base model size, GPT-4 rates the quality of our imitation models increasingly highly (right).

During training, we chunk the conversations into 2048-token blocks. We introduce special tokens that demarcate the beginning of each user query and model output. We fine-tune using standard LM losses on only the model outputs. Following Chowdhery et al. (2022); Chung et al. (2022), we train for one epoch using the AdamW optimizer with gradients re-scaled by the magnitude of each weight. We use a learning rate of $2e^{-3}$ with 1000 steps of linear warm-up from 0, and we train with batch size 32. All models are trained in JAX using a combination of fully sharded data parallelism and tensor parallelism on TPUs hosted by Google Cloud or on a single Nvidia DGX server with 8 A100 GPUs.

For automatic evaluations, we measure performance on 5-shot MMLU (Hendrycks et al., 2021), 3-shot Natural Questions (Kwiatkowski et al., 2019b), 0-shot HumanEval (Chen et al., 2021b), and 6-shot chain-of-thought GSM8K (Cobbe et al., 2021). We report the original scoring metrics associated with each dataset (e.g., exact match for NQ).

For human evaluation, we conduct blind pairwise output comparisons using Mechanical Turk. In our UI, we present each rater with a task instruction and the output of two unknown models, one of which is ChatGPT and the other is one of our imitation models (see Figure 7 in Appendix B). The raters select which output they prefer or if the two outputs are equal in quality. We use approximately 70 crowd workers and evaluate on 255 held-out prompts.\footnote{To mitigate any test-set leakage, we filtered out queries with a BLEU score greater than 20% with any example from our training set. We also removed non-English and coding-related prompts, as these cannot be reliably reviewed by crowd workers. We pay the evaluators roughly $15/hour based on the average time it takes to complete a task. We select workers with $\geq 95\%$ approval rating, who are located in an English-speaking country, and who have at least 100 HITs completed.} We report the average preference across the dataset and one standard deviation around the mean. Additionally, we conduct evaluations using GPT-4 and present additional details of the prompts used in Appendix C. We will release all of our training code, pre-trained models, and human evaluation test-set.\footnote{Training codebase available at https://github.com/young-geng/EasyLM, test-set available at https://github.com/arnav-gudibande/koala-test-set, and models available at https://huggingface.co/young-geng/koala.}

4.2 Qualitative Analysis and Crowdworker Evaluation Show Promise

**Imitation models are rated highly by crowdworkers.** We were initially surprised at the quality of our ShareGPT-Mix models: while the base GPT-2 or LLaMA models often fail to follow instructions, the imitation models produce outputs that stay on task. These initial promises were further supported, as crowdworkers and GPT-4 often rated the quality of the imitation models’ outputs as equal or better than those of ChatGPT, especially as we scale up model size (right of Figures 1 and 3). However, we also find that human ratings quickly saturate as we scale up the amount of imitation data (left of Figures 1 and 3), alluding to possible shortcomings of this approach.

Figure 4: Automatic evaluations. As we increase the amount of imitation data, there is little improvement on various benchmarks, or even performance regressions (top). On the other hand, scaling up the base LM steadily improves results (bottom), suggesting that the key difference between open-source and closed-source LMs is a raw capabilities gap, rather than the finetuning data used.

4.3 Targeted Automatic Evaluations Expose Failure Modes

**Broad-coverage imitation models fail to close the gap across most tasks.** We next ran targeted automatic evaluations to isolate whether specific model capabilities improved after imitation. We found that across every benchmark that we measured, ShareGPT-Mix imitation models do not improve (or even decline) in accuracy as compared to the base model, even when adding additional imitation data (Figure 4, top). This shows that imitating ChatGPT on our broad-coverage imitation data does not improve the model across most axes, e.g., factual knowledge, coding, and problem solving. We argue that this occurs because ChatGPT has captured far more knowledge and capabilities from the web as compared to LLaMA. In turn, it is unreasonable to expect that a small amount of imitation data (e.g., 1000x less data than pre-training) would enable one to bridge this gap. Instead, we argue that broadly matching ChatGPT using weaker base LMs such as LLaMA-13B would require a concerted effort to collect an extremely large and diverse imitation dataset that is far closer to the scale of pretraining. It is currently unclear whether such an effort is worth undertaking or feasible.

**Training local imitation models is far more successful.** On the other hand, our model trained to locally imitate ChatGPT using the NQ-synthetic data is far more successful. In particular, the imitation models’ performance improves significantly as compared to the LLaMA base model (see Table 1) and quickly approaches the accuracy of ChatGPT. This demonstrates that it is far more feasible to distill a specific behavior from ChatGPT as opposed to broadly matching its capabilities.

**An empirical trade-off exists between different evaluation datasets.** A curious phenomenon is that training on more ShareGPT-Mix data hurts performance as compared to the base model on some of our evaluations (compare the black versus blue lines in Figure 4). We believe that these performance regressions arise from a distribution shift and tension between the conversational-style fine-tuning data and the downstream benchmarks. An open problem is whether these performance regressions can be mitigated using regularization or by mixing in pre-training data during fine-tuning.

| Model | Imitation Data | NQ | CNN |
|---------|----------------------|-----|-----|
| 7B | – | 17 | 22.1 |
| 7B | ShareGPT-Mix | 10 | 28.7 |
| 7B | Targeted Imitation | 22 | 29.2 |
| 13B | – | 20 | 27.3 |
| 13B | ShareGPT-Mix | 15 | 30.7 |
| 13B | Targeted Imitation | 27 | 33.6 |
| ChatGPT | – | 31 | 39.9 |

Table 1: We train imitation models on broad-coverage data from ShareGPT-Mix or targeted data (NQ-synthetic or TLDR-Synthetic). The broad-coverage models do not improve on zero-shot NQ (or even degrade in performance) and only improve slightly on CNN summarization, demonstrating the limitations of imitating the capabilities of ChatGPT holistically. However, the models trained on targeted data substantially close the gap to ChatGPT on both NQ and CNN summarization, showing that local imitation of a model is far more feasible in practice.
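For reference, the NQ and CNN columns in Table 1 are exact-match and ROUGE-1 scores, respectively. A minimal sketch of both metrics is shown below; it assumes the `rouge-score` package, and the answer normalization is a common convention rather than necessarily the exact scorer used here.

```python
# Sketch: the two metrics behind Table 1 (exact match for NQ, ROUGE-1 for CNN).
# Assumes `pip install rouge-score`; normalization details are illustrative.
import re
import string
from rouge_score import rouge_scorer

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, answers: list[str]) -> bool:
    return any(normalize(prediction) == normalize(a) for a in answers)

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def rouge1_f(prediction: str, reference: str) -> float:
    return scorer.score(reference, prediction)["rouge1"].fmeasure

print(exact_match("The Eiffel Tower", ["eiffel tower"]))         # True
print(rouge1_f("the cat sat on the mat", "a cat sat on a mat"))  # ~0.67
```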
**Improving base LMs is the highest leverage action.** Rather than increasing imitation data size, we find that using better base LMs (by increasing base model size) does lead to substantial accuracy improvements (Figure 4, bottom). This aligns with our previous claim: there exists a capabilities gap between today’s open-source LMs and their closed-source counterparts that cannot be closed by cheaply fine-tuning on imitation data. Instead, the best way to improve open-source LMs is to tackle the difficult challenge of developing better base LMs, whether it be via model scaling or other means.

### 4.4 IMITATION MODELS LEARN STYLE, NOT CONTENT

Finally, we investigate why there is a strong discrepancy between crowdworker evaluations, where imitation models appear quite strong, and results on NLP benchmarks, where imitation models appear no better than base LMs. We find that imitation models perform well according to human evaluations because they are adept at mimicking ChatGPT’s style—they output fluent, confident, and well-structured answers. In particular, we show in Table 2 that as we add more imitation data, ChatGPT and our imitation models produce outputs with a similar length, similar word choice, similar use of an authoritative tone, and similar low-level structure (e.g., use of lists). However, as shown in our previous automatic evaluations, the imitation models have weak factuality. In other words, imitation models actually embody some of the worst aspects of AI assistants: their answers sound confident but are less factual than ChatGPT. This is perhaps best elucidated in Figure 2, where the imitation model outputs an answer that is similar in style to ChatGPT’s answer but is completely incorrect.

**Human evaluation is increasingly hard.** Unfortunately, crowd workers without domain expertise or significant time investments can easily be deceived by stylistic components—answers that sound confident and correct are often spuriously chosen more often. To improve human evaluation, it is thus increasingly necessary both to engage domain experts and to curate a set of highly difficult prompts that can rigorously test different models’ capabilities. Surprisingly, our GPT-4 evaluations also showed the same trends as our crowdworker evaluations (albeit with a slightly larger absolute preference for ChatGPT’s outputs). While this suggests that GPT-4 may be a viable candidate to cheaply emulate human evaluations on some tasks, it also implies that LLMs may replicate some human-like cognitive biases. We look forward to future work that further investigates this possibility.

**Imitation models inherit the safety and toxicity style of the teacher model.** Finally, despite imitation only providing benefits in mimicking the “style” or “persona” of the target model, there is still value in doing so. For example, OpenAI has carefully and deliberately trained ChatGPT to be “harmless” to end users, often avoiding toxic outputs and refusing to respond to questionable user requests. We find that our imitation models also inherit these components. In particular, we show in Figure 5 that as we finetune on more imitation data, the imitation model’s outputs become less toxic on RealToxicityPrompts (Gehman et al., 2020), as the model learns to abstain in a similar fashion to ChatGPT. Consequently, we conclude that model imitation is highly effective in cases when one has a powerful base LM and is looking to subvert the need to annotate expensive finetuning data.

| Metric | LLaMA | 20M | 80M | 150M | ChatGPT #2 |
|--------|-------|-----|-----|------|------------|
| If ChatGPT outputs a list, do we? | 13% | 50% | 67% | 81% | 83% |
| If ChatGPT outputs a summary paragraph, do we? | 2% | 40% | 42% | 48% | 55% |
| Unigram intersection w/ ChatGPT’s output | 19.5 | 40.4 | 41.9 | 42.5 | 49.2 |
| Pearson correlation in length w/ ChatGPT’s output | -0.11 | 0.51 | 0.62 | 0.62 | 0.67 |
| Outputs are in authoritative tone according to GPT-4 | 57% | 99% | 98% | 98% | 98% |

Table 2: As we add more imitation data, the style of our models’ outputs is increasingly similar to that of ChatGPT. In particular, we generate outputs from our imitation models and compare them to a random ChatGPT response across different metrics. We also report a rough “upper bound” by comparing a second random ChatGPT output to the original ChatGPT response (ChatGPT #2).
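Two of the style metrics in Table 2 are easy to approximate; a rough sketch follows. The variable names are hypothetical, we assume paired lists of model and ChatGPT outputs, and the exact unigram-intersection definition used in the paper may differ from the Jaccard-style version shown here.

```python
# Sketch: approximating two style-similarity metrics from Table 2.
from scipy.stats import pearsonr

def unigram_intersection(a: str, b: str) -> float:
    """Percent of unique unigrams shared between two outputs (Jaccard x 100)."""
    ua, ub = set(a.lower().split()), set(b.lower().split())
    return 100 * len(ua & ub) / max(len(ua | ub), 1)

def length_correlation(model_outputs: list[str], chatgpt_outputs: list[str]) -> float:
    """Pearson correlation between paired output lengths, in words."""
    m = [len(x.split()) for x in model_outputs]
    c = [len(x.split()) for x in chatgpt_outputs]
    return pearsonr(m, c)[0]

# Hypothetical usage with paired generations:
# scores = [unigram_intersection(m, c)
#           for m, c in zip(model_outputs, chatgpt_outputs)]
# print(sum(scores) / len(scores), length_correlation(model_outputs,
#                                                     chatgpt_outputs))
```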
5 DISCUSSION

**Finetuning as a simple knowledge extractor.** Our results show that a modest amount of finetuning provides little to no improvements on an LM’s knowledge or capabilities. We thus agree with the view that pre-training is the main source of an LM’s capabilities, and that finetuning acts as a lightweight method to train the model to extract its own knowledge (Schulman, 2023). This is the reason why improving models by imitating ChatGPT on a small set of data is insufficient, as the base knowledge is largely unaffected. Furthermore, this view suggests that during finetuning time, you may even want to avoid introducing new knowledge (i.e., do not imitate better models), as you will otherwise be training the model to guess or hallucinate its answers, rather than actually doing the task as intended (Gao, 2021; Goldberg, 2023; Schulman, 2023).

**Should you be worried about imitation?** Imitating proprietary LMs comes with many potential implications for small and large companies alike. Our results suggest that the efficacy of model imitation is limited when there is a large gap between the base and target LM. Thus, we believe that companies who can establish a capabilities gap using large amounts of data, compute, or algorithmic advances are the ones who are best positioned to build and maintain competitive advantages. On the other hand, companies that look to build moats by using off-the-shelf LMs with proprietary fine-tuning datasets may be comparatively more vulnerable to imitation.

**Potential confounders to our findings.** While we believe our findings are well supported, there are a few potential hidden confounders that could change our conclusions. First, as we are unaware of the pre-training data used by ChatGPT, it is possible that some of the tasks that we evaluate on could have been contaminated into ChatGPT’s training data, thus inflating its accuracy numbers. Moreover, to conduct imitation, we perform supervised learning on the outputs from the target model. However, it may also be possible to use the target model to perform RLHF or constitutional AI (Christiano et al., 2017; OpenAI, 2022; Bai et al., 2022) to further improve results. Lastly, we only considered relatively simple methods for collecting imitation data; however, there may be more advanced methods (e.g., active learning) that could improve the effectiveness or efficiency of model imitation.

**Implications for other forms of model imitation** There has been a flurry of recent work that performs model imitation in more indirect ways than we study here. For example, the training process of many recent vision-language models (Li et al., 2022; Liu et al., 2023; Ye et al., 2023; Zhu et al., 2023) includes ChatGPT or GPT-4 outputs at some stages. Furthermore, it has become common to use large LMs in various ways during the data annotation and creation process, e.g., to aid crowd workers, to perform data augmentation, to identify mislabeled data, and more. Our findings may have implications for these approaches, e.g., it is likely that vision-language models that include OpenAI data may have similar failure modes to the ones described in our work.

**Technical limitations of model imitation** Imitating proprietary models also has various technical limitations: the models inherit the weaknesses and biases of proprietary models, imitation does not allow one to directly improve on the design decisions of closed AI companies (e.g., data annotation strategies), and these systems are roughly upper-bounded by the capabilities of the target proprietary model. Moreover, it is difficult to answer certain scientific questions using imitation models because they include proprietary black-box models in their training pipeline.

6 RELATED WORK

**Model distillation** Model imitation is similar to model distillation (Hinton et al., 2014), where one trains a student model to imitate a teacher. While conceptually similar, there are several major practical differences. For distillation, the training data, model architecture, and hyperparameters are known for the teacher. In model imitation, one tries to imitate the teacher without this knowledge. Moreover, for distillation it is common to use training objectives that utilize the probability distribution of the teacher, whereas in model stealing such a distribution is typically unavailable.

**Past work on model imitation** Prior work has shown that model imitation is possible for various domains (Lowd & Meek, 2005; Tramèr et al., 2016; Orekondy et al., 2019), including language classifiers (Krishna et al., 2020; Pal et al., 2019) and machine translation systems (Wallace et al., 2020). Nevertheless, past work considers a setting where models are trained from scratch, and thus the main proprietary nature of a model is the company’s internal training data. In our setting, systems like ChatGPT are proprietary because they also leverage OpenAI’s internal pre-trained LMs that are stronger than any available open-source LM.

**Defending against model imitation** Our results show that imitation is a moderate concern for companies. In turn, there is a need to develop methods to mitigate or detect imitation. There is an existing body of work in this direction, e.g., one can detect whether a particular model is trained via imitation (Juuti et al., 2019; Szyller et al., 2019; Krishna et al., 2020; Maini et al., 2021) or slow model stealing by sacrificing some performance (Orekondy et al., 2020; Dziedzic et al., 2022a; Wallace et al., 2020; Dziedzic et al., 2022b). Unfortunately, existing methods often exhibit too severe of a tradeoff to be deployable in practice.

7 CONCLUSION AND FUTURE WORK

In this work, we critically analyzed the efficacy of model imitation.
We showed that imitation can indeed improve the style, persona, and instruction adherence of open-source LMs. However, imitation falls short in improving LMs across more challenging axes such as factuality, coding, and problem solving. On one hand, these results indicate that businesses can successfully establish and safeguard a competitive advantage by pre-training powerful base models. Conversely, it also implies that if two groups possess equally competent base LMs, one can easily mimic the persona and behavior of the other model, without needing to annotate expensive fine-tuning data.

Moving forward, our findings raise a range of technical and societal questions. First, we show that existing crowd worker evaluations have trouble elucidating the differences between imitation models and proprietary ones, despite clear differences existing between them. In turn, the future of human evaluation remains unclear: how can we cheaply and quickly probe the utility of a powerful LLM? Second, given the large gap between LLaMA and ChatGPT (the latter model is faster, cheaper, and more accurate), and the insufficiencies of model imitation, there are obvious open questions on how to best improve open-source LMs (e.g., increasing model scale, improving pre-training data quality, developing new pretraining methods, etc.). Finally, our work raises ethical and legal questions, including whether the open-source community should continue to advance progress by directly imitating company products, as well as what countermeasures companies can take to protect and license their intellectual property. In future work, we hope to delve deeper into these issues and devise better methods for the ethical and responsible deployment of LMs.

ACKNOWLEDGEMENTS

We thank Nicholas Carlini, the members of Berkeley NLP, and the members of Berkeley RAIL for valuable feedback on this project. Eric Wallace is supported by the Apple Scholars in AI/ML Fellowship. Part of this research was supported with Cloud TPUs from Google’s TPU Research Cloud (TRC).

REFERENCES

Yuvanesh Anand, Zach Nussbaum, Brandon Duderstadt, Benjamin Schmidt, and Andriy Mulyar. GPT4All: Training an assistant-style chatbot with large scale data distillation from GPT-3.5-Turbo, 2023.

AnthropicAI. Introducing Claude, 2023.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. 2016.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. 2021a. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021b. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, 2023. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. NIPS, 2017. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Łukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021. Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, and Nicolas Papernot. On the difficulty of defending self-supervised learning against model extraction. In ICLR, 2022a. Adam Dziedzic, Muhammad Ahmad Kaleem, Yu Shen Lu, and Nicolas Papernot. Increasing the cost of model extraction with calibrated proof of work. In ICLR, 2022b.
dBO8ZPQMVF
"Can you help me understand better the relationship between MAS and a standard diffusion model? Can (...TRUNCATED)
"MAS: Multi-view Ancestral Sampling for 3D Motion Generation Using 2D Diffusion\n\nAnonymous authors(...TRUNCATED)
TYXtXLYHpR
"In the related works, you distinguish your method from shapelet-based methods, stating that these a(...TRUNCATED)
"Towards Transparent Time Series Forecasting\n\nKrzysztof Kacprzyk \nUniversity of Cambridge \nkk7(...TRUNCATED)
FvK2noilxT
"In Sec 4.1 Training dataset, why did the authors use different standard deviations to noise the MAN(...TRUNCATED)
"GENEOH DIFFUSION: TOWARDS GENERALIZABLE HAND-OBJECT INTERACTION DENOISING VIA DENOISING DIFFUSION\n(...TRUNCATED)
otHZ8JAIgh
"Since both PID and PIB rely on sampling from distribution, it does seem that the performance will i(...TRUNCATED)
"Prototypical Information Bottlenecking and Disentangling for Multimodal Cancer Survival Prediction\(...TRUNCATED)
9NKRfhKgzI
"I'm having difficulty understanding the reason for conditioning on u for formalizing Goal 2 and Goa(...TRUNCATED)
"Adversarially Robust and Privacy-Preserving Representation Learning via Information Theory\n\nAnony(...TRUNCATED)
Uj2Wjv0pMY
"Since this paper is about the new dataset which is claimed to focus on error recognition, however t(...TRUNCATED)
"Put on your detective hat: What’s wrong in this video? A Dataset for Error Recognition in Procedu(...TRUNCATED)