Paper_ID | Question | ocr_output
LhNZqkuVte
The paper mentions that HyperMask has some limitations in terms of memory consumption due to the requirement for the hypernetwork's output layer to match the number of parameters in the target network. Are there any potential strategies or approaches to mitigate this memory consumption issue?
HYPERMASK: ADAPTIVE HYPERNETWORK-BASED MASKS FOR CONTINUAL LEARNING

Anonymous authors. Paper under double-blind review.

ABSTRACT Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks. To overcome this problem, there exist many continual learning strategies. One of the most effective is the hypernetwork-based approach. The hypernetwork generates the weights of a target model based on the task's identity. The model's main limitation is that the hypernetwork can produce completely different nets for each task. Consequently, each task is solved separately. The model does not use information from the networks dedicated to previous tasks and practically produces a new architecture whenever it learns a subsequent task. To solve such a problem, we use the lottery ticket hypothesis, which postulates the existence of sparse subnetworks, named winning tickets, that preserve the performance of a full network. In this paper, we propose a method called HyperMask, which trains a single network for all tasks. The hypernetwork produces semi-binary masks to obtain target subnetworks dedicated to new tasks. This solution inherits the ability of the hypernetwork to adapt to new tasks with minimal forgetting. Moreover, due to the lottery ticket hypothesis, we can use a single network with weighted subnets dedicated to each task.

1 INTRODUCTION Learning from a continuous data stream is challenging for deep learning models. Artificial neural networks suffer from catastrophic forgetting (McCloskey & Cohen, 1989) and drastically forget previously known information upon learning new knowledge. Continual learning (CL) [Hsu et al., 2018] aims to learn consecutive tasks effectively while preventing forgetting of the already learned ones. Continual learning is a rapidly developing field of machine learning that utilizes various techniques. Regularization-based methods [Kirkpatrick et al., 2017; Chaudhry et al., 2020; Jung et al., 2020; Titsias et al., 2019; Mirzadeh et al., 2020] aim to keep the information learned about previous tasks by regularizing the model towards previous weights. Rehearsal-based methods [Rebuffi et al., 2017; Chaudhry et al., 2018; Saha et al., 2020] use a set of real or generated data from previous tasks. Architecture-based approaches [Mallya et al., 2018; Serra et al., 2018; Li et al., 2019; Wortsman et al., 2020; Kang et al., 2022] suggest that interference between tasks can be reduced by using newly developed architectural elements. The hypernetwork approach [von Oswald et al., 2019; Henning et al., 2021] is located at the crossroads of regularization-based and architecture-based approaches. A hypernetwork architecture [Ha et al., 2016] is a neural network that generates weights for a separate target network designated to solve a specific task. In a continual learning setting, a hypernetwork generates the weights of a target model based on the task identity. Such models can be considered an architecture-based approach, since we build a new architecture for each task. On the other hand, we can treat the hypernetwork like a regularization model: at the end of training, we have a single meta-model, which produces dedicated weights. Due to the ability to generate completely different weights for each task, hypernetwork-based models feature minimal forgetting. Unfortunately, such properties are obtained by producing completely different architectures for subsequent tasks; only the hypernetwork itself shares information across tasks.
Such a model can produce different nests for each task and solve them separately. The hypernetwork cannot use the weight of the target network from the previous task. To solve such a problem, we use the lottery ticket hypothesis (LTH) \cite{Frankle2018}, which postulates that we can find subnetworks named winning tickets with performance similar (or even better) to the full architecture. However, the search for optimal winning tickets in continual learning scenarios is difficult \cite{Mallya2018,Wortsman2020}, as iterative pruning requires repetitive pruning and retraining for each arriving task, which is impractical. Alternatively, Winning SubNetworks (WSN) \cite{Kang2022} incrementally learns model weights and task-adaptive binary masks. WSN eliminates catastrophic forgetting by freezing the subnetwork weights considered important for the previous tasks and memorizing masks for all tasks. Our paper proposes a method called HyperMask\footnote{The source code is available at \url{https://github.com/...}}, which combines hypernetwork and lottery ticket hypothesis paradigms. Hypernetwork produces semi-binary masks to the target network to obtain weighted subnetworks dedicated to new tasks; see Fig. 1. The masks produced by the hypernetwork modulate the weights of the main network and act like dynamic filters, enhancing the target weights that are important for a given task and decreasing the importance of the remaining weights. In consequence, we work on a single network with subnetworks dedicated to each task and we do not need to freeze any part of this model. When HyperMask learns a new task, we reuse the learned subnetwork weights from the previous tasks. HyperMask also inherits the ability of the hypernetwork to adapt to new tasks with minimal forgetting. We produce a semi-binary mask directly from the trained task embedding vector, which creates a dedicated subnetwork for each dataset. To the best of our knowledge, our model is the first architecture-based CL model that uses hypernetwork, or, in general, any meta-model, for producing masks for other networks. Updates of hypernetworks are prepared not directly for the weights of the main model, like in \cite{vonOswald2019}, but for masks dynamically filtering the target model. Our contributions can be summarized as follows: - We propose a method that uses the hypernetwork paradigms for modeling the lottery ticket-based subnetwork. The hypernetwork modulates the weights of the main model instead of their direct preparation as in \cite{vonOswald2019}. - HyperMask inherit the ability to reuse weights from the lottery ticket module and adapt to new tasks from the hypernetwork paradigm. - The semi-binary mask of HyperMask helps the target network to discriminate classes in consecutive CL tasks, see Fig. 2. 2 RELATED WORKS Continual learning Typically, continual learning approaches are divided into three main categories: regularization, dynamic architectures, and replay-based techniques \cite{Parisi2019,DeLange2021,Wang2023}. Regularization-based techniques expand the loss function by using regularization terms that control the distance between optimal parameters from the previous task and the new one. We hypothesize that the best parameters for a new task can be located in the neighborhood of nest parameters from prior tasks. In the case of weight regularization, we regularize the variation of the most important network parameters. In EWC \cite{Kirkpatrick2017,Ritter2018}, the importance is expressed by the Fisher information matrix. 
SI \cite{Zenke2017} approximates the contribution of the parameter to the total loss variation and its update length throughout the training trajectory. MAS \cite{Aljundi2018} accumulates importance measurements based on the sensitivity of predictive results to changes in parameters, both online and unsupervised. In the case of function regularization, we use the regularization term not on weights but on the intermediate or final output of the prediction function. In the learning without forgetting paradigm (LwF) (Li & Hoiem, 2017), we use distillation loss to compare new task outputs generated by the new and old models. LwM (Dhar et al., 2019) takes advantage of attention maps for training samples. EBLL (Jung et al., 2020) learns task-specific autoencoders and prevents changes in feature reconstruction. In CW-TaLaR (Mazur et al., 2022), we use the Cramer-Wold distance (Knop et al., 2020) between two probability distributions defined in a target layer of an underlying neural network shared by all tasks. Rehearsal-based approaches store information about data for training previous tasks and replay them to prevent catastrophic forgetting. In experience replay, we typically store a few old training samples within a small memory buffer. Reservoir Sampling (Riemer et al., 2018; Chaudhry et al., 2019) randomly selects a fixed number of old training samples obtained from each training batch. A Ring Buffer (Lopez-Paz & Ranzato, 2017) guarantees that the same amount of old training samples is present for each class. Mean-of-Feature (Rebuffi et al., 2017) selects a similar number of old training samples that are closest to the mean of the features of each class. In generative replay or pseudo-rehearsal, we train an additional generative model to replay generated data. DGR (Shin et al., 2017) provides an initial framework for data sampling from the old generative model to inherit previously learned knowledge. MeRGAN (Wu et al., 2018) enforces the consistency of the generated data with the same random noise between the old and new generative models, similar to the role of function regularization. Architecture-based approaches use dynamic architectures that dedicate separate model branches to different tasks. These branches can be developed incrementally, such as in the case of Progressive Neural Networks (Rusu et al., 2016). The architecture of a system can be optimized to enhance parameter effectiveness and knowledge transfer, for example, by reinforcement learning (RCL (Xu & Zhu, 2018), BNS (Qin et al., 2021)), architecture search (LitG (Li et al., 2019), BNS (Qin et al., 2021)), and variational Bayesian methods (BSA (Kumar et al., 2021)). Alternatively, a static architecture can be reused with iterative pruning as proposed by PackNet (Mallya & Lazebnik, 2018) or by the application of Supermasks (Wortsman et al., 2020). **Pruning-based Continual Learning** Most architecture-based methods use additional memory to obtain better performance of continual learners. In the pruning-based method, we build computationally efficient and memory-efficient strategies. CLNP (Golkar et al., 2019) freezes the most significant neurons for a given task. Then, we reinitialize weights that were not selected for future task training. Piggyback (Mallya et al., 2018) uses a pre-trained model and task-specific binary masks. This technique has limited knowledge transfer since we retrain the binary masks for each task. Consequently, the approach’s effectiveness largely depends on the caliber of the backbone model. 
HAT (Serra et al., 2018) uses task-specific learnable attention vectors to recognize significant weights for each task. LL-Tickets (Chen et al., 2020) show that we can find a subnetwork, referred to as lifelong tickets, that performs well on all tasks during continual learning. If the tickets cannot work on the new task, the method looks for more prominent tickets from the existing ones. However, the LL-Tickets expansion process is made up of a series of retraining and pruning steps. In Winning SubNetworks (WSN) (Kang et al., 2022), authors propose to jointly learn the model and task-adaptive binary masks dedicated to task-specific subnetworks (winning tickets). Unfortunately, WSN eliminates catastrophic forgetting by freezing the subnetwork weights for the previous tasks and memorizing masks for all tasks. This paper proposes the next step toward producing a sparse subnetwork for continual learning. Instead of the classical binary mask and freezing strategy, we use the hypernetwork paradigm. The hypernetwork generates a semi-binary mask to a target model based on the task embedding. **Hypernetworks for continual learning** A hypernetwork architecture (Ha et al., 2016) is a neural network that generates a vector of weights for a separate target network designated to solve a specific task. Hypernetworks are widely used, e.g., generative models (Spurek et al., 2020), implicit representation (Szatkowski et al., 2023) and few-shot learning (Sendera et al., 2023). In a continuous learning environment, a hypernetwork generates the weights of a target model based on the task’s identity. HNET [von Oswald et al., 2019] uses task embeddings to produce weights dedicated to each task. HNET can be seen as an architecture-based strategy as we create a distinct architecture for each task, but it can also be viewed as a regularization model. After training, a single meta-model is left, which produces specialized weights. Thanks to the possibility of generating completely different weights for each task, hypernetwork-based models demonstrate minimal forgetting. However, this advantage leads to difficulty with forward/backward transfers. Hypernetworks can generate different nests for tasks and solve them independently. Consequently, the hypernetwork may have problems using the previously learned knowledge to solve a new task. In Henning et al. (2021), authors propose a Bayesian version of the hypernetworks in which they produce parameters of the prior distribution of the Bayesian network. 3 HYPERMASK: ADAPTIVE HYPERNETWORKS FOR CONTINUAL LEARNING This section describes our hypernetwork-based continual learning method called HyperMask. In HyperMask, the hypernetwork returns semi-binary masks to produce weighted subnetworks dedicated to new tasks. This solution inherits the ability of the hypernetwork to adapt to new tasks with minimal forgetting. Moreover, we can use a single network with weighted subnets dedicated to each task thanks to the lottery ticket hypothesis. Problem statement Let us consider a supervised learning setup where \( T \) tasks are derived to a learner sequentially. We denote that \( X_t = \{x_{i,t}\}_{i=1}^{n_t} \) is the dataset for the task \( t \), composed of \( n_t \) elements of raw instances and \( Y_t = \{y_{i,t}\}_{i=1}^{n_t} \) are the corresponding labels. Data from all tasks we denote by \( D_t = (X_t, Y_t) \subset X \times Y \). 
We assume a neural network \( f(\cdot; \theta) \), parameterized by the model weights \( \theta \), and the standard continual learning objective
\[ \theta^{*} = \operatorname*{arg\,min}_{\theta} \frac{1}{n_t} \sum_{i=1}^{n_t} L\big(f(x_{i,t}; \theta), y_{i,t}\big), \]
where \( L(\cdot, \cdot) \) is a classification objective loss such as the cross-entropy loss. \( D_t \) for task \( t \) is only accessible when learning task \( t \), but rehearsal-based continual learning methods are allowed to memorize a small portion of the dataset for replay. We further assume that task identity is given in both the training and testing stages, except for an additional series of experiments. To provide capacity for learning future tasks, a continual learner often adopts an over-parameterized deep neural network. Assuming such over-parameterization, we can find subnetworks with equal or better performance than the full network. In our model, we use the hypernetwork paradigm to produce such subnetworks.

Hypernetwork Hypernetworks, introduced in Ha et al. (2016), are defined as neural models that generate weights for a separate target network solving a specific task. Before we present our solution, we describe the classical approach to using hypernetworks in CL. A hypernetwork generates individual weights for all tasks in a continual learning setting. In HNET [von Oswald et al., 2019; Henning et al., 2021], the authors propose using trainable embeddings \( e_t \in \mathbb{R}^N \), for \( t \in \{1, ..., T\} \), and the hypernetwork \( H \) with weights \( \Phi \) generating weights \( \theta_t \) for the target network \( f \) dedicated to the \( t \)-th task
\[ H(e_t; \Phi) = \theta_t. \]
The HNET meta-architecture (hypernetwork) produces different weights for each continual learning task. We have the function \( f_{\theta_t} : X \rightarrow Y \) (a neural network classifier with weights \( \theta_t \)), which takes elements from a continual learning dataset and predicts labels. The target network is not trained directly. In HNET, we use a hypernetwork \( H_\Phi : \mathbb{R}^N \ni e_t \rightarrow \theta_t \), which for a task embedding \( e_t \) returns the weights \( \theta_t \) of the corresponding target network \( f_{\theta_t} : X \rightarrow Y \). Thus, each continual learning task is represented by a function (classifier)
\[ f(\cdot; \theta_t) = f(\cdot; H(e_t; \Phi)). \]
At the end of training, we have a single meta-model, which produces dedicated weights. Due to the ability to generate completely different weights for each task, hypernetwork-based models feature minimal forgetting. However, hypernetworks can produce different nets for each task and solve them separately; in practice, a new architecture is produced whenever a subsequent task is learned. To solve such a problem, we use the lottery ticket hypothesis, which postulates the existence of sparse subnetworks, named winning tickets, that preserve the performance of a full network.

**Algorithm 1:** The pseudocode of HyperMask.
**Input:** hypernetwork $H$ with weights $\Phi$, target network $f$ with weights $\theta$, sparsity $p \geq 0$, regularization strengths $\beta > 0$ and $\lambda > 0$, $n$ training iterations, datasets $\{D_1, D_2, ..., D_T\}$, $(x_{i,t}, y_{i,t}) \in D_t, t \in \{1, ..., T\}$
**Output:** updated hypernetwork weights $\Phi$, updated target network weights $\theta$
1. Randomly initialize weights $\Phi$ and $\theta$ together with embeddings $\{e_1, e_2, ..., e_T\}$;
2. for $t \leftarrow 1$ to $T$ do
3.   if $t > 1$ then
4.     $\theta^* \leftarrow \theta$;
5.     for $t' \leftarrow 1$ to $t - 1$ do
6.       Store $m_{t'} \leftarrow H(e_{t'}, p; \Phi)$;
7.     end
8.   end
9.   for $i \leftarrow 1$ to $n$ do
10.     $m_t \leftarrow H(e_t, p; \Phi)$;
11.     $\theta_t \leftarrow m_t \odot \theta$;
12.     $\hat{y}_{i,t} \leftarrow f(x_{i,t}; \theta_t)$;
13.     if $t = 1$ then
14.       $L \leftarrow L_{current}$;
15.     else
16.       $L \leftarrow L_{current} + \beta \cdot L_{output} + \lambda \cdot L_{target}$;
17.     end
18.     Update $\Phi$ and $\theta$;
19.   end
20.   Store $e_t$;
21. end

HyperMask uses trainable embeddings $e_t \in \mathbb{R}^N$ for $t \in \{1, ..., T\}$, a threshold level $p$ and a hypernetwork $H$ with weights $\Phi$ generating a semi-binary mask $m_t$ with $p\%$ zeros for the target network weights $\theta$ dedicated to each task
$$H(e_t, p; \Phi) = \sigma_p(\,\cdot\,; H(e_t; \Phi)) = m_t,$$
where $\sigma_p(\cdot; \cdot)$ denotes that the indicator function defined below is applied element-wise to all values at the output of $H$. In HyperMask, we have two trainable architectures: the hypernetwork $H$ has trainable parameters $\Phi$, and the target network has trainable parameters $\theta$. The meta-architecture (hypernetwork) produces a different semi-binary mask for each continual learning task. More precisely, we model the function $f_\theta : X \rightarrow Y$ with general weights $\theta$ shared by all tasks. The target network is trained with a classical cross-entropy cost function. We simultaneously train a hypernetwork $H_\Phi : \mathbb{R}^N \ni e_t \rightarrow m_t$, which for a task embedding $e_t$ returns a semi-binary mask $m_t$ for the corresponding target network weights $\theta$. Thus, each continual learning task is represented by a function (classifier)
$$f(\cdot; \theta \odot m_t) = f(\cdot; \theta \odot H(e_t, p; \Phi)),$$
where $\odot$ is element-wise multiplication.

**HyperMask – overview** Now we are ready to present HyperMask. Our approach uses a hypernetwork to produce semi-binary masks for the target network. We apply the tanh activation function to the output of the hypernetwork. Then, for each output layer $x$, we zero out the $p\%$ of values with the smallest absolute values, where $p$ is the sparsity level of the target layer and $c(p, i, t; x)$ is the corresponding threshold value for the $i$-th iteration of the $t$-th task, $t \in \{1, ..., T\}$. The selection is represented by a task-dependent semi-binary weight mask $m_t$: values whose absolute value exceeds the threshold are kept and used during the forward pass, while the remaining ones are set to zero. Formally, $m_t$ is obtained by applying an indicator function $\sigma_p(\cdot; \cdot)$ to a weight $w$ which is an element of $x$, a single layer of the hypernetwork $H$ output,
$$\sigma_p(w; x) = \begin{cases} 0 & \text{if } |w| \leq c(p, i, t; x), \\ w & \text{otherwise}. \end{cases}$$
Additionally, the ratio $p$ is constant starting from the second task but, for the first trained task, it is gradually increased from 0 to $p$:
$$c(p, i, t; x) = \begin{cases} P(p; |x|) & \text{if } t > 1, \\ P\left(\frac{i}{n} \cdot p; |x|\right) & \text{otherwise}, \end{cases}$$
where each task is trained for $n$ iterations, the absolute value of the consecutive weights of $x$ is calculated element-wise, and $P(p; |x|)$ denotes the $p$-th percentile of the set of absolute values of a given mask layer. In the training procedure, we add two regularization terms, described right after the following sketch of the mask thresholding.
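The snippet below is a minimal PyTorch sketch of the thresholding just described (the semi-binary indicator $\sigma_p$ together with the first-task sparsity schedule). The helper names are ours, and the code is an illustration under our reading of the formulas, not the reference implementation.

```python
import torch

def semi_binary_mask(h_layer: torch.Tensor, p: float) -> torch.Tensor:
    """Zero the p% of values with the smallest absolute magnitude in one
    hypernetwork output layer; the remaining values keep their magnitudes."""
    if p <= 0.0:
        return h_layer
    threshold = torch.quantile(h_layer.abs().flatten(), p / 100.0)
    return torch.where(h_layer.abs() > threshold, h_layer, torch.zeros_like(h_layer))

def sparsity_level(p: float, i: int, n: int, t: int) -> float:
    """Constant sparsity p from the second task on; ramped from 0 to p
    over the n iterations of the first task."""
    return p if t > 1 else p * i / n

# Hypothetical usage for one target layer at iteration i of task t:
# h_layer = torch.tanh(hypernet_layer_output)              # output of H
# m_layer = semi_binary_mask(h_layer, sparsity_level(p, i, n, t))
# masked_weights = m_layer * target_layer_weights          # theta ⊙ m_t
```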
The first one is output regularizer proposed by Li & Hoiem (2017): $$L_{\text{output}} = \sum_{t=1}^{T-1} \sum_{i=1}^{|X_t|} \|f(x_{i,t}; \theta^* \odot m_t) - f(x_{i,t}; \theta \odot m_t)\|^2,$$ where $\theta^*$ is the set of the target network parameters before attempting to learn task $T$. This solution is not only expensive in terms of memory but also does not follow the online learning paradigm adequately. But hypernetworks von Oswald et al. (2019); Henning et al. (2021) avoid this problem. Task-conditioned hypernetworks produce an output depending on the task embedding. We can compare the fixed hypernetwork output produced before learning task $T$ with weights $\Phi^*$ with the output after a current proposition of hypernetwork weight modifications $\Delta \Phi$, according to the cross-entropy loss. The difference between HyperMask and von Oswald et al. (2019) relies on the fact that we just regularize masks dedicated to consecutive continual learning tasks and the target weights have to work in general, while von Oswald et al. (2019) regularize weights that are further directly placed in the target network. Finally, in HyperMask, the output regularization loss is given by: $$L_{\text{output}}(\Phi^*, \Phi, \Delta \Phi, \{e_t\}) = \frac{1}{T-1} \sum_{t=1}^{T-1} \|H(e_t, 0; \Phi^*) - H(e_t, 0; \Phi + \Delta \Phi)\|^2,$$ where $\Delta \Phi$ is considered fixed. We do not sparse the hypernetwork weights at this stage, i.e. $p = 0$. Table 1: Average accuracy with a standard deviation of different continual learning methods. We obtained better results than two of our main baselines: WSN and HNET. Moreover, we have the best results on CIFAR-100 and Tiny ImageNet and second scores in Permuted MNIST and Split MNIST. Results for different methods than HyperMask are derived from other papers. * — model trained on ResNet-20 architecture; ** — model trained on ZenkeNet architecture. | Method | Permuted MNIST | Split MNIST | Split CIFAR-100 | Tiny ImageNet | |--------------|----------------|-------------|-----------------|---------------| | HAT | 97.67 ± 0.02 | – | 72.06 ± 0.50 | – | | GPM | 94.96 ± 0.07 | – | 73.18 ± 0.52 | 67.39 ± 0.47 | | PackNet | 96.37 ± 0.04 | – | 72.39 ± 0.37 | 55.46 ± 1.22 | | SupSup | 96.31 ± 0.09 | – | 75.47 ± 0.30 | 59.60 ± 1.05 | | La-MaML | – | – | 71.37 ± 0.67 | 66.99 ± 1.65 | | FS-DGPM | – | – | 74.33 ± 0.31 | 70.41 ± 1.30 | | WSN, c = 3% | 94.84 ± 0.11 | – | 70.65 ± 0.36 | 68.72 ± 1.63 | | WSN, c = 5% | 95.65 ± 0.03 | – | 72.44 ± 0.27 | 71.22 ± 0.94 | | WSN, c = 10% | 96.14 ± 0.03 | – | 74.55 ± 0.47 | 71.96 ± 1.41 | | WSN, c = 30% | 96.41 ± 0.07 | – | 75.98 ± 0.68 | 70.92 ± 1.37 | | WSN, c = 50% | 96.24 ± 0.11 | – | 76.38 ± 0.34 | 69.06 ± 0.82 | | WSN, c = 70% | 96.29 ± 0.00 | – | – | – | | EWC | 95.96 ± 0.06 | 99.12 ± 0.11| 72.77 ± 0.45 | – | | SI | 94.75 ± 0.14 | 99.09 ± 0.15| – | – | | DGR | 97.51 ± 0.01 | 99.61 ± 0.02| – | – | | HNET+ENT | 97.57 ± 0.02 | 99.79 ± 0.01| – | – | | HyperMask (our) | 97.66 ± 0.04 | 99.64 ± 0.07| 77.34 ± 1.94* | 76.22 ± 1.06* | The final cost function consists of the classical cross-entropy $L_{\text{current}}$, output regularization $L_{\text{output}}$, and target layer regularization $L_{\text{target}}$: $$L = L_{\text{current}} + \beta \cdot L_{\text{output}} + \lambda \cdot L_{\text{target}},$$ where $\beta$ and $\lambda$ are hyperparameters that control the strength of regularization. 
Moreover, we have added classical $L^1$ regularization on the target network weights
$$L_{\text{target}}(\theta^*_t, \theta_t) = \| \theta^*_t - \theta_t \|_1,$$
where $\theta^*_t$ is the set of target network parameters before attempting to learn task $T$. Optionally, we can weight $L_{\text{target}}$ by the hypernetwork-generated mask (masked $L^1$) to ensure that the most important target network weights are not drastically modified, while the other ones remain more susceptible to modifications. In such a case
$$L_{\text{target}}(\theta^*_t, \theta_t, m_t) = \| m_t \odot (\theta^*_t - \theta_t) \|_1.$$
During hyperparameter optimization, we compared two variants of $L_{\text{target}}$, i.e. masked and non-masked $L^1$. The conclusive choice depends on the considered dataset; a short sketch combining all three loss terms is given after the numerical comparison below.

Figure 2: Visualization of mean accuracy (with 95% confidence intervals) for Permuted MNIST for 10 and 100 tasks and Split MNIST for 5 tasks. The blue lines represent test accuracy calculated after training consecutive models, while the orange lines correspond to test accuracy after finishing all CL tasks. The decrease in accuracy for 10-task Permuted MNIST and Split MNIST is very small. In the Permuted MNIST 100-task case, the mean accuracy equals 95.92 ± 0.18.

Figure 3: Visualization of a target network’s output classification layer activations in two scenarios. On the left-hand side, we used a target network weighted by a semi-binary mask (HyperMask). On the right side, we used only the target network without a mask produced by the hypernetwork. In the first case, data sample classes are separated; in the second case, only samples from the first task are distinguished.

4 EXPERIMENTS

Baselines We compared our solution with two natural baselines: WSN [Kang et al., 2022] and HNET [von Oswald et al., 2019]. WSN uses the lottery ticket hypothesis, while HNET uses the hypernetwork paradigm. We also added a comparison with strong CL baselines from different categories. In particular, we used regularization-based methods: HAT [Serra et al., 2018] and EWC [Kirkpatrick et al., 2017], rehearsal-based methods like GPM [Saha et al., 2020] and FS-DGPM [Deng et al., 2021], pruning-based methods like PackNet [Mallya & Lazebnik, 2018] and SupSup [Wortsman et al., 2020], and a meta-learning approach like La-MaML [Gupta et al., 2020].

Experimental setting We used the experimental setting from WSN [Kang et al., 2022] and HNET [von Oswald et al., 2019]. We did not change the original architectures provided by the authors. Some results in the tables were taken directly from the respective papers.

Numerical comparison We evaluated our algorithm on four standard benchmark datasets: Permuted MNIST, Split MNIST, Split CIFAR-100, and Tiny ImageNet [Le & Yang, 2015]. In Tab. 1, we compare HyperMask with the state-of-the-art models. The most important conclusion is that we obtained better results than our two main baselines: WSN and HNET. Moreover, we had the second score on Permuted MNIST and Split MNIST. In the case of Permuted MNIST, our exact result was equal to 97.664, so it was only 0.006 smaller than HAT. In the case of CIFAR-100, we had the best score when we used ResNet-20 and about 4% less for ZenkeNet. Using ResNet-20, we outperformed all reference methods on Tiny ImageNet by over 4%. However, in WSN, La-MaML and FS-DGPM, the authors used an architecture with four convolutional and three fully-connected layers.
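To make the training objective concrete, below is a hedged sketch of a single HyperMask optimization step for a task $t > 1$, combining the cross-entropy term, the hypernetwork output regularizer (computed with $p = 0$; the lookahead $\Delta\Phi$ of the original formulation is omitted for brevity), and the masked $L^1$ target regularizer. Names such as `forward_with_weights`, `flat_weights`, `embeddings`, and `stored_masks` are illustrative assumptions, `semi_binary_mask` is the helper from the earlier sketch, and this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def hypermask_step(hypernet, target_net, embeddings, stored_masks,
                   theta_star, x, y, t, p, beta, lam):
    """One simplified training step for 0-indexed task t >= 1:
    L = L_current + beta * L_output + lambda * L_target."""
    # current-task loss with the semi-binary mask applied to the shared weights
    m_t = semi_binary_mask(hypernet(embeddings[t]), p)
    theta = target_net.flat_weights()                     # hypothetical helper
    logits = target_net.forward_with_weights(x, m_t * theta)
    loss_current = F.cross_entropy(logits, y)

    # output regularizer: keep the (unsparsified, p = 0) masks of previous
    # tasks close to the values stored before training task t
    loss_output = torch.stack(
        [((hypernet(embeddings[s]) - stored_masks[s]) ** 2).sum()
         for s in range(t)]
    ).mean()

    # masked L1 regularizer on the shared target weights
    loss_target = (m_t.detach() * (theta_star - theta)).abs().sum()

    return loss_current + beta * loss_output + lam * loss_target
```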
Influence of semi-binary mask on classification task In this subsection, we show that the semi-binary mask of HyperMask helped the target network to discriminate classes in consecutive CL Architecture We used two-layered MLP with 100 neurons per layer for Permutated MNIST and Split MNIST. For Split CIFAR-100, we used ResNet-20 and ZenkeNet [Zenke et al., 2017] and for Tiny ImageNet we applied ResNet-20. tasks. To visualize such properties, we considered the Permuted MNIST dataset (results for other datasets we included in Appendix). We took the fully-trained model and collected activations of the classification layer of the target network. In Fig. 3, we present t-SNE two-dimensional embeddings obtained from the set of activations containing all data samples from 10 tasks. Values were calculated for an exemplary model that achieved 97.72% overall accuracy after 10 CL tasks. The results for a tandem hypernetwork and target network (like in HyperMask) are presented on the left side. On the right side is shown a situation in which a mask from the hypernetwork was not applied to the target network trained in HyperMask. In the first case, data sample classes are clearly separated; in the second case, only samples from the first task are distinguished. The remaining data samples form one cluster in the embedding space. Interestingly, data from the first task are separated from samples from all subsequent tasks, which indicates that the first task plays a special role for HyperMask. Forgetting of previous tasks The HNET models produce completely different weights for each task. In consequence, they demonstrate minimal forgetting. HyperMask models inherit such ability thanks to generating different masks for each task. To visualize such properties, we present in Fig. 2 mean accuracy (with 95% confidence intervals) for the best setting of HyperMask for ten tasks of the Permuted MNIST dataset (left side) and five tasks of the Split MNIST dataset (right side). The blue lines represent test accuracy calculated after training consecutive models, while the orange lines correspond to test accuracy after finishing all CL tasks. The decrease in accuracy is very small, and the confidence intervals almost overlap, suggesting a very limited negative backward transfer. In Fig. 3, we present a comparison of our HyperMask and HNET in terms of test accuracies for CL tasks after consecutive training sessions. Both methods suffer from performance drops only slightly. Interestingly, HyperMask preserves the accuracy on the first task even after training of many subsequent ones. It is clearly visible in Fig. 2 where results for 100-task Permuted MNIST are presented. Even after training of the next 99 tasks, HyperMask has similar test accuracy on the first task to the accuracy calculated just after its training. Then, a performance drop typical for continual learning methods may be observed. It may indicate that the tandem of hyper- and the target network is getting used to the first task which strongly affects the behavior of weights. Stability of HyperMask model HyperMask models have a similar number of hyperparameters as HNET. The most critical parameters are $\beta$ and $\lambda$, which control regularization strength. We also use a parameter describing the level of zeros in a semi-binary mask and we define whether masked or non-masked $L_1$ has to be used. Masked $L_1$ means that $L_{target}$ was multiplied by the hypernetwork-generated mask while non-masked $L_1$ denotes the opposite case. In Fig. 
4, we present mean test accuracy (with 95% confidence intervals) for five runs of the selected architecture settings of HyperMask, for ten tasks of the Permuted MNIST dataset, calculated after training of all tasks. The presented results indicate that a small change in hyperparameters does not cause a performance drop. The blue line represents the best hyperparameter setting found. **Scenario with model’s task prediction** We also evaluated HyperMask in a scenario in which task identity is not directly given to the model but must be inferred by the network itself. Following von Oswald et al. (2019), we prepared a task inference method based on the entropy values. After training for all tasks, consecutive data samples were propagated through the hyper- and target network for different task embeddings. The task with the lowest entropy value of the classification layer’s output in the target network was selected for the final calculations. Then, the classifier decision for the corresponding embedding was considered. Table 2: Mean overall accuracy (in %) in a scenario where the model must recognize task identity. For HNET+ENT and HyperMask, the inference is made based on the entropy results. The presented results from methods different than HyperMask are derived from von Oswald et al. (2019). | Method | Permuted MNIST | Split MNIST | |------------|----------------|-------------| | HNET+ENT | 91.75 ± 0.21 | 69.48 ± 0.80| | EWC | 33.88 ± 0.49 | 19.96 ± 0.07| | SI | 29.31 ± 0.62 | 19.99 ± 0.06| | DGR | 96.38 ± 0.03 | 91.79 ± 0.32| | HyperMask | 90.31 ± 1.36 | 85.80 ± 3.08| For HyperMask, we also calculated mean task prediction accuracy, which is equal to 90.30 ± 1.56 for Permuted MNIST and 62.90 ± 5.83 for Split MNIST. The discussed scores indicate a potential of HyperMask for task inference approaches, i.e. with another neural network for task prediction. **Limitations and future works** One of the main limitations of HyperMask is the memory consumption due to the fact that the hypernetwork output layer must have the same number of neurons as the number of parameters in the target network. The chunking approach described in von Oswald et al. (2019), in which the target’s weight values are generated by the hypernetwork partially, was not adopted in HyperMask because it led to considerably worse results so far. However, this approach should be analyzed thoroughly and may bring positive future results. HyperMask may be considered in few-shot class incremental learning in which a model is trained in a large number of base samples and then a small portion of samples representing new classes is delivered to the model (Kang et al., 2023). Due to the high accuracy of HyperMask on the first task (despite many subsequent ones), our method may be very useful in this CL scenario. **5 CONCLUSION** We present HyperMask, a method that trains a single network for all tasks. The hypernetwork produces semi-binary masks to generate target subnetworks tailored to new tasks. This approach utilizes the hypernetwork’s capacity to adjust to new tasks with minimal forgetting. Also, due to the lottery ticket hypothesis, we can use a single network with weighted subnets devoted to each task. The experimental section shows that our model performs better than lottery ticket and hypernetwork-based continual learning models. We also obtained comparable results to the state-of-the-art methods. We applied our method for multilayer perceptions and convolutional neural networks working as classifiers. 
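Regarding the memory-consumption limitation discussed above, the chunking approach of von Oswald et al. (2019) would replace the single huge output layer with a smaller head that emits the mask in fixed-size chunks conditioned on learned chunk embeddings. A rough, hypothetical sketch of such a chunked mask generator is given below; since the paper reports that chunking has so far led to considerably worse results, this should be read as a starting point for future work rather than a drop-in solution.

```python
import torch
import torch.nn as nn

class ChunkedMaskHypernet(nn.Module):
    """Generates a mask of size n_target_params in chunks of chunk_size,
    so the output layer has chunk_size units instead of n_target_params."""
    def __init__(self, emb_dim, chunk_size, n_target_params, hidden=100):
        super().__init__()
        self.chunk_size = chunk_size
        self.n_chunks = -(-n_target_params // chunk_size)  # ceil division
        self.n_target_params = n_target_params
        # one learned embedding per chunk, shared across tasks
        self.chunk_emb = nn.Parameter(torch.randn(self.n_chunks, emb_dim) * 0.01)
        self.body = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, chunk_size),
        )

    def forward(self, task_emb):
        # condition every chunk on (task embedding, chunk embedding)
        task = task_emb.unsqueeze(0).expand(self.n_chunks, -1)
        chunks = self.body(torch.cat([task, self.chunk_emb], dim=1))
        return torch.tanh(chunks).reshape(-1)[: self.n_target_params]
```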
HyperMask also has a potential for application in strategies in which task identity has to be inferred by the method and is not known a priori. REFERENCES Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European conference on computer vision (ECCV), pp. 139–154, 2018. Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-gem. In International Conference on Learning Representations, 2018. Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc’Aurelio Ranzato. On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486, 2019. Arslan Chaudhry, Naeemullah Khan, Puneet Dokania, and Philip Torr. Continual learning in low-rank orthogonal subspaces. Advances in Neural Information Processing Systems, 33:9900–9911, 2020. Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, and Zhangyang Wang. Long live the lottery: The existence of winning tickets in lifelong learning. In International Conference on Learning Representations, 2020. Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. IEEE transactions on pattern analysis and machine intelligence, 44(7):3366–3385, 2021. Danruo Deng, Guangyong Chen, Jianye Hao, Qiong Wang, and Pheng-Ann Heng. Flattening sharpness for dynamic gradient projection memory benefits continual learning. Advances in Neural Information Processing Systems, 34:18710–18721, 2021. Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyi Wu, and Rama Chellappa. Learning without memorizing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5138–5146, 2019. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2018. Siavash Golkar, Michael Kagan, and Kyunghyun Cho. Continual learning via neural pruning. arXiv preprint arXiv:1903.04476, 2019. Gunshi Gupta, Karmesh Yadav, and Liam Paull. La-maml: Look-ahead meta learning for continual learning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546. David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016. Christian Henning, Maria Cervera, Francesco D’Angelo, Johannes Von Oswald, Regina Traber, Benjamin Ehret, Sejin Kobayashi, Benjamin F Grewe, and João Sacramento. Posterior meta-replay for continual learning. Advances in Neural Information Processing Systems, 34:14135–14149, 2021. Yen-Chang Hsu, Yen-Cheng Liu, Anita Ramasamy, and Zsolt Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv:1810.12488, 2018. Sangwon Jung, Hongjoon Ahn, Sungmin Cha, and Taesup Moon. Continual learning with node-importance based adaptive group sparse regularization. Advances in neural information processing systems, 33:3647–3658, 2020. Haeyong Kang, Rusty John Lloyd Mina, Sultan Rizky Hikmawan Madjid, Jaehong Yoon, Mark Hasegawa-Johnson, Sung Ju Hwang, and Chang D Yoo. Forget-free continual learning with winning subnetworks. In International Conference on Machine Learning, pp. 
10734–10750. PMLR, 2022. Haeyong Kang, Jaehong Yoon, Sultan Rizky Hikmawan Madjid, Sung Ju Hwang, and Chang D. Yoo. On the soft-subnetwork for few-shot class incremental learning. In The Eleventh International Conference on Learning Representations, 2023.
IAkflJmNrC
What is the reference model for semantic calculation used in sec 3.3 and sec 4? If I am understanding correctly, in section 4, you are using the base sentence embedding model without further finetuning as the reference model. Is it the same in sec 3.3?
Polarity-Aware Semantic Retrieval with Fine-Tuned Sentence Embeddings Anonymous authors Paper under double-blind review Abstract This paper investigates the effectiveness of retrieving sentences with multiple objectives – polarity and similarity – by fine-tuning sentence-transformer models on augmented supervised data. We establish two opposing metrics, namely Polarity Score and Semantic Similarity Score, for evaluation purposes. These are used in a test suite with various lightweight sentence-transformer models, hyperparameters and loss functions. Experiments are conducted on two binary classification problems from different domains: the SST-2 dataset for sentiment analysis and the detection of sarcastic news headlines. Addressing the catastrophic forgetting problem, our results show that the configuration of loss functions drastically alters a model’s capability to retain similarity while simultaneously differentiating on classes from supervised data. These findings indicate that we can 1) improve upon generalized sentence embeddings for information retrieval and 2) increase interpretability of sentence embeddings by studying their adaptability to different domains. 1 Introduction In the rapidly evolving field of Natural Language Processing, the tasks of text classification and semantic textual similarity (STS) are well established and have countless use cases. While rule-based, statistical and deep learning models for both tasks have been successful throughout the years (Tai et al., 2015; Minaee et al., 2021; Li et al., 2022), newer contextual word representations and transformer models have now become the de-facto standard (Joulin et al., 2017; Howard & Ruder, 2018; Devlin et al., 2019; Yang et al., 2019; Raffel et al., 2020). Sentence embeddings have also shown great promise for STS (Reimers & Gurevych, 2019), often trained by contrastive learning (Chuang et al., 2022; Gao et al., 2022). Research suggests these procedures are effective with much less data than previously needed for end-to-end models, as shown with few-shot training examples in SetFit (Tunstall et al., 2022). By incorporating classification into the data sources for sentence-transformers and adjusting the training configurations, we study the capability of restructuring the embedding space throughout fine-tuning to capture both sentences of the same polarity and of high semantic similarity. This scheme also allows for standard classification by considering the labels of retrieved similar sentences in the training data. For evaluation, we establish two metrics: Polarity Score, which measures the classification performance, and Semantic Similarity Score, which quantifies the semantic closeness of texts compared to a reference model. These metrics allow us to closely interpret the behavior of the resulting semantic space in different domains, addressing the problem of catastrophic forgetting during fine-tuning (Goodfellow et al., 2015; Opitz & Frank, 2022). Experiments are conducted on two datasets: 1) SST-2, Stanford Sentiment Treebank (Socher et al., 2013), a binary sentiment dataset on full sentences, and 2) A dataset with sarcastic news headlines (Misra & Arora, 2023). The remainder of this paper is structured as follows: Section 2 discusses related work. Section 3 introduces the datasets, metrics, models and training details. Section 4 presents experimental results and Section 5 discussions. Finally, conclusions and future work are described in Section 6. 
2 Related Work Related research is largely based on developments within word and sentence embeddings. Commonly used embedding techniques include word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and ELMo (Peters et al., 2018). In the realm of sentence embeddings, early methods involved concatenation and aggregation of word embeddings to produce a sentence representation (Le & Mikolov, 2014; Joulin et al., 2017). However, more recent research has focused on developing specialized models to encode sentence representations, as exemplified by systems like InferSent (Conneau et al., 2017), universal sentence encoder (Yang et al., 2020), sentence-transformers (SBERT) (Reimers & Gurevych, 2019) and SimCSE (Gao et al., 2022). SBERT is trained using a pre-trained BERT model to learn the representations of a given sentence. While techniques and setups vary, an example of a training procedure is by providing triplets forming \((\text{anchor sentence}, \text{positive}, \text{negative})\), where the model attempts to maximize the distance between the anchor and the negative (dissimilar sentence), while minimizing the distance between the anchor and the positive (similar) sentence. This methodology provided efficient models for STS (Agirre et al., 2013; Reimers & Gurevych, 2019; Gao et al., 2022; Tunstall et al., 2022; Wang et al., 2022; Li et al., 2023). Several datasets and benchmarks have been published for STS since the SemEval shared task (Agirre et al., 2013), including the STS Benchmark (Cer et al., 2017), SICK (Marelli et al., 2014), and BIOSSES (Soğancıoğlu et al., 2017), all of which are now found in the Massive Text Embedding Benchmark (MTEB) (Muenninghoff et al., 2022). Transformer models have excelled at the task, as is shown in the tables on HuggingFace’s leaderboard for the evaluation.\footnote{https://huggingface.co/spaces/mteb/leaderboard} Currently, the General Text Embeddings model (Li et al., 2023) receives the highest scores. The work by Opitz & Frank (2022) is highly related to interpretability for multiple objectives, where the authors create a set of sub-embeddings for features such as negation and semantic roles, addressing the problem catastrophic forgetting (Goodfellow et al., 2015). This problem has been further studied in detail by Chen et al. (2020), adjusting the mechanisms behind the Adam optimizer (Kingma & Ba, 2017), and Luo et al. (2023), a study describing the forgetting effect during fine-tuning of large language models on various key features like domain knowledge and reasoning. In this work, however, the focus is shifted towards understanding the embedding space for specific domains by augmenting the data sources directly and adjusting the parameters behind the loss functions. 3 Methods and Data This section includes information on datasets, evaluation metrics, baseline models, loss functions, data generation, and the fine-tuning pipeline. We use two sources for classification evaluation. The modeling scheme is generalized to any data source for binary classification. **SST-2** The Stanford Sentiment Treebank (Socher et al., 2013) is commonly used for binary classification tasks and is implemented in the GLUE benchmark (Wang et al., 2019). It consists of a train/test/validation split with 67,349/1821/872 samples respectively. 
However, the labels for the test split are hidden and can only be evaluated by submissions to GLUE.\footnote{https://gluebenchmark.com/leaderboard} As our system is not aimed at the broad range of tasks present in GLUE, we evaluate using the available validation split, for which our system achieves an accuracy of 93.23. A high classification score is not the purpose of this work and is merely an indicator of how retrieved similar sentences can be used to infer the label of an unseen sentence. **Sarcastic Headlines** The “News Headlines Dataset for Sarcasm Detection” (Misra & Arora, 2023) contains 28,619 news headlines from HuffPost (non-sarcastic) and The Onion (sarcastic). Misra & Arora claims this to guarantee high quality labels. Furthermore, headlines are primarily self-contained and do not rely on additional context, thus well suited for evaluating both similarity and polarity. Retrieving similar sarcastic sentences to produce labels for the test set gives an accuracy of 92.27, outperforming the models presented by Amin et al. (2023). 3.1 Evaluation For a sentence \( s \), we retrieve the \( k \) nearest neighbours with a model \( M \), denoted \( s_1^M, \ldots, s_k^M \). These are evaluated on the criteria of polarity and semantic similarity. 3.1.1 Polarity Score To measure whether a model favors texts of the same polarity as the input in its predictions, we compute a weighted average polarity score over the \( k \) predictions depending on the polarity of \( s \). Formally, for a sentence \( s \), this can be expressed as: \[ P_M(s) := \sum_{i=1}^{k} w_i \cdot \text{pol}(s_i^M) \quad \text{where} \quad \text{pol}(s_i^M) := \begin{cases} 1 & \text{if } s \text{ and } s_i^M \text{ have the same polarity}, \\ 0 & \text{otherwise}. \end{cases} \] (1) The weights \( w_i \) can be chosen to reflect the importance of ranked suggestions. Instead of averaging them, we choose a linear discounting model where the \( i \)-th suggestion is scaled by a factor of \( k + 1 - i \). By normalization, we get weights \( w_i := \frac{2(k+1-i)}{k(k+1)} \). If the predictions are mostly of the same polarity as the input, this is reflected in a value close to one. In any case, we would expect fine-tuned models to be better at predicting sentences of the same polarity than the pre-trained baseline, or reference, model. 3.1.2 Semantic Similarity Score In assessing the quality of predicted sentences, simply aligning their polarity with the input is insufficient. We necessitate a metric to gauge the semantic similarity: the weighted average cosine similarity between the predictions from a model \( M \) and their corresponding embeddings under a baseline reference model \( R \), pre-trained for semantic similarity. As the fine-tuned model will increase its internal representation of similarity within its embeddings throughout training, it is necessary to compare similarity with a reference model. The cosine similarity is defined as \( \cos_{\text{sim}}(s_1, s_2) := \frac{x_1 \cdot x_2}{||x_1|| \cdot ||x_2||} \), where \( x_i \) is the vector for sentence \( s_i \). For a model \( M \), we compute the Semantic Similarity Score \( S_M \) for a sentence \( s \): \[ S_M(s) := \sum_{i=1}^{k} w_i \cdot \cos_{\text{sim}}(s, s_i^M) \] (2) The weights \( w_i \) are reused from the Polarity metric, as defined in Section 3.1.1. If the predicted sentences from model \( M \) remain semantically similar to the input sentence, we should observe that \( S_M(s) \) is equal or slightly lower than the reference \( S_R(s) \). 
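As a concrete reference, the sketch below computes both scores for a single query sentence, assuming the $k$ retrieved neighbours (and their labels) are already available and that, as described above, the cosine similarities for $S$ are taken between embeddings produced by the reference model $R$. Function names are our own.

```python
import numpy as np

def rank_weights(k: int) -> np.ndarray:
    """Linear discounting weights w_i = 2(k+1-i) / (k(k+1)), i = 1..k."""
    return 2 * (k + 1 - np.arange(1, k + 1)) / (k * (k + 1))

def polarity_score(query_label: int, retrieved_labels: list[int]) -> float:
    w = rank_weights(len(retrieved_labels))
    same = np.array([int(lbl == query_label) for lbl in retrieved_labels])
    return float((w * same).sum())

def semantic_similarity_score(query_emb_ref: np.ndarray,
                              retrieved_embs_ref: np.ndarray) -> float:
    """Weighted cosine similarity; all embeddings come from the reference model R."""
    w = rank_weights(len(retrieved_embs_ref))
    q = query_emb_ref / np.linalg.norm(query_emb_ref)
    n = retrieved_embs_ref / np.linalg.norm(retrieved_embs_ref, axis=1, keepdims=True)
    return float((w * (n @ q)).sum())
```

The discounting weights sum to one, so $P$ lies in $[0, 1]$ and $S$ is a weighted mean of the cosine similarities.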
3.2 Baseline Models The models in Table 1 are selected based on varying complexity, but more importantly, performance versus size and inference time. Data is sourced from the MTEB leaderboard (Muennighoff et al., 2022). We select the commonly used sentence-transformer model, all-MiniLM-L6-v2 (Reimers & Gurevych, 2019) – referred to as MiniLM-6, along with the better performing models GTE-base/small (Li et al., 2023) and the E5-small-v2 (Wang et al., 2022). We use the entire test sets for target embeddings, and select a lookup sample of five times the size of the test set as source embeddings – keeping an even ratio between datasets. Inspecting the importance of \( k \) for each model shows a near static relationship between the models (see Figure 1), where performance drops slightly for higher values of \( k \), as can be expected when forcing the model to retrieve more sentences. We select \( k = 16 \) for further experiments, as a reasonable number for retrieval and inspection, as well as to reduce the number of computations. Although performance is generally high for all values, the E5-small model achieves the highest scores. Conversely, the miniml-6 performs the worst, with especially low scores for \( S \). All models are used for continued evaluations. Table 1: Sentence-transformer baseline model selection and performance ($k = 16$) for polarity and semantic similarity on SST-2 and sarcastic headlines. Standard deviation subscripted. | Model | Size MB | Embedding dimension | STSBenchmark reported avg | SST-2 $P$ | SST-2 $S$ | Sarcasm $P$ | Sarcasm $S$ | |-------------|---------|---------------------|---------------------------|----------|-----------|-------------|-------------| | E5-small-v2 | 130 | 768 | 85.95 | 81.5$_{23.7}$ | 85.5$_{1.7}$ | 71.4$_{21.2}$ | 83.4$_{1.5}$ | | GTE-base | 220 | 768 | 85.73 | 80.4$_{22.6}$ | 83.7$_{1.4}$ | 67.4$_{20.7}$ | 81.4$_{1.6}$ | | GTE-small | 70 | 384 | 85.57 | 77.8$_{22.2}$ | 84.8$_{1.4}$ | 66.8$_{20.6}$ | 82.5$_{1.6}$ | | MiniLM-6 | 90 | 384 | 82.03 | 63.0$_{21.9}$ | 46.6$_{7.4}$ | 63.8$_{20.2}$ | 42.3$_{5.6}$ | Figure 1: Baseline models with average performance across both datasets when retrieving the $k$ nearest matches. ### 3.3 Loss Functions for Sentence Embeddings To assess the quality of sentence embeddings, models are trained with different loss functions depending on the desired properties for downstream tasks. The Sentence-Transformers library provides a wide range of predefined loss functions.\footnote{We encourage the interested reader to study the loss functions at \url{https://www.sbert.net/docs/package_reference/losses.html}} However, not all losses provide the desired flexibility for supporting our constraints of multiple objectives. There are four batching triplet losses, all of which generate every valid combination of triplets, typically creating a far too large dataset. We describe triplets and their generation constraints in Section 3.4. Further, the *DenoisingAutoEncoderLoss* adds noise and reconstructs the original sentences. This process is suitable for unsupervised training, but not for our application of encoding polarity within the embeddings. The same limitation holds for *MSELoss*, which uses the MSE loss between a target and source, with no relation to it being positive or negative. *MegaBatchMarginLoss* finds the least similar pair between an anchor and a sentence of the same polarity. As our similarity scores are not gold labels, we find this loss incompatible. 
*MarginMSELoss* requires a gold similarity score between a query and a positive/negative value, which we do not have. *CosineSimilarityLoss* considers the similarity between pairs of sentences. This is the very basis for the augmentation of datasets to begin with, as we have ensured a threshold of similarity between the sentences of equal polarity. However, this loss is the default for SetFit (Tunstall et al., 2022), which we use in our comparisons. The *SoftMaxLoss* was in Reimers & Gurevych (2019) used to train models on NLI data (Williams et al., 2018), adding a softmax classifier on the output, compatible with multiple classes. However, it does not provide a clear distinction of similarity. After filtering, we employ a set of four loss functions: TripletLoss (Schroff et al., 2015), MultipleNegativesRankingLoss (Henderson et al., 2017), OnlineContrastiveLoss and ContrastiveLoss (Hadsell et al., 2006). These have varying data inputs related to how the model assesses the similarity between input sentences. All models support a similarity function, for which we use the cosine similarity. They are described in more detail below. **TripletLoss** consists of triplets of sentences \((A, P, N)\) where \(A\) is the “anchor”, \(P\) is similar to the anchor, and \(N\) is dissimilar. In context of binary classification, \(P\) is attributed to the label 1, and \(N\) label 0. The loss is then expressed as: \[ \max(|\text{emb}(A) - \text{emb}(P)| - |\text{emb}(A) - \text{emb}(N)| + \lambda, 0), \] where \(\lambda\) is the margin, specifying the minimum separation between \(A\) and \(N\). **MultipleNegativesRankingLoss** consists of sentence pairs, assuming \((a_i, p_i)\) pairs as positive and \((a_i, p_j)\) pairs for \(i \neq j\) as negatives. It calculates the loss by minimizing the negative log-likelihood for softmax-normalized scores, encouraging positive pairs to have higher similarity scores than negative pairs. **(Online)ContrastiveLoss** consists of \(\{0, 1\}\)-labelled tuples (Anchor, Sentence) where the label indicates whether \(|\text{emb}(A) - \text{emb}(S)|\) is to be maximized, indicating dissimilarity (0) or minimized, indicating similarity (1). In the online variant, the loss is only calculated for strictly positive or negative pairs, reported to generally perform better (Tunstall et al. 2022). The margin parameter \(\lambda\) controls how far dissimilar pairs need to be separated. For each compatible loss function, we select various margin values (Table 2) in order to study the models’ behavior. | Loss function | \(\lambda\) margin | \(\lambda\) default | |-------------------------------|--------------------|---------------------| | Triplet Loss | \{0.01, 0.1, 1.0, 5.0, 7.5, 10\} | 5.0 | | Multiple Negatives Ranking Loss | – | – | | Contrastive Loss | \{0.1, 0.25, 0.5, 0.75, 1.0\} | 0.5 | | Online Contrastive Loss | \{0.1, 0.25, 0.5, 0.75, 1.0\} | 0.5 | ### 3.4 Data generation Different loss functions require different data inputs. To speed up data sampling when training, we precompute datasets corresponding to each input type: 1) Triplet, 2) Contrastive, and 3) MultipleNegatives, referred to as *example generation*. Original data is encoded using a sentence-transformer model, from which an index is built. For each (sentence, label) pair in the data, we compute the \(k\) nearest neighbors of each polarity, requiring a minimum semantic similarity threshold of \(\geq 0.5\), resulting in semantically similar pairs for each label. 
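A sketch of this mining step with the sentence-transformers library is shown below; the 0.5 similarity threshold and the per-polarity neighbour lists follow the description above, while the encoder choice and the output structure are illustrative assumptions rather than the exact pipeline used in the paper.

```python
from sentence_transformers import SentenceTransformer, util

def mine_pairs(sentences, labels, model_name="all-MiniLM-L6-v2",
               k=16, min_sim=0.5):
    """For every sentence, collect up to k semantically similar sentences of
    the same polarity and of the opposite polarity (similarity >= min_sim)."""
    model = SentenceTransformer(model_name)
    emb = model.encode(sentences, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(emb, emb)  # full similarity matrix (small datasets only)

    examples = []
    for i, (sent, lab) in enumerate(zip(sentences, labels)):
        order = sims[i].argsort(descending=True).tolist()
        same, opposite = [], []
        for j in order:
            if j == i or sims[i][j] < min_sim:
                continue
            (same if labels[j] == lab else opposite).append(sentences[j])
        examples.append({"anchor": sent,
                         "positive": same[:k],       # same polarity, similar
                         "negative": opposite[:k]})  # opposite polarity, similar
    return examples
```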
These pairs are then combined according to the selection of loss functions, e.g., with a TripletLoss requiring (anchor, similar, dissimilar). As this process creates a mapping between every source sentence to the \(k\) similar sentences, we control the data generation size by introducing a dropout, tuned to generate roughly 250,000 examples for each loss function, the highest number we can reach when normalizing the sample count across all configurations.\(^4\) See Figure 2 for an illustration of the example generation process. Data examples for each loss type are listed in Table 3. ### 3.5 Fine-tuning The generated examples, described in Section 3.4, is the input to each model configuration, forming the basis for fine-tuned models. From the selected dataset, we fetch the generated examples corresponding to the loss function, from which \(N\) are resampled each training step. The dataset, in its original form, is passed to the reference model as well as the fine-tuned model after each training step, from which an index is computed to retrieve the \(k\) closest matches in the training samples for each sample in the test split, used to compute the scores for polarity and semantic similarity. --- \(^4\)Data generation for MultipleNegativesRankingLoss with 0 dropout for the smallest dataset (sarcastic headlines, 22,000 samples) produces 243,793 examples. Figure 2: The example generation process. Table 3: Data samples from SST-2 for the different loss function categories. | Loss type | Data sample | Data type | |-----------|------------------------------------------------------------------------------|-----------| | Triplet | **Anchor:** Totally unexpected directions | Triple | | | **Similar+Same polarity:** Dramatically moving | | | | **Similar+Opposite polarity:** Utterly misplaced | | | Multiple | **Anchor:** Bring new energy | Tuple | | Negatives | **Similar+Same polarity:** Juiced with enough energy and excitement | | | Contrastive| **Anchor:** Is a movie that deserves recommendation | Tuple + Label | | | **Similar:** Effort to watch this movie | | | | **Label:** 0 (increase distance → make less similar) | | | | **Anchor:** Of the jokes, most at women’s expense | | | | **Similar:** Dumb gags, anatomical humor | | | | **Label:** 1 (reduce distance → make more similar) | | 4 Experiments and Results The results are based around fine-tuning and continuous evaluation of the baseline models in different setups for loss functions and corresponding parameters. From available literature, fine-tuning transformers between 1 to 3 epochs seems sufficient in many cases [Gao et al., 2022]. Beyond this, we observe smaller improvements – but no signs of overfitting. To decide on a suitable number of training samples (in the range [50, 100000]) for further experiments, we study the differences between models after 5 epochs. Despite the reported effectiveness of few-shot learning for sentence-transformers [Tunstall et al., 2022], we observe improvements in polarity when increasing the sample size far beyond the scope of few-shot learning. Table 4 illustrates this behavior, aggregated across all models and loss configurations. While the polarity score $P$ increases, the semantic similarity score $S$ takes a slight hit throughout training. The latter is to be expected because we fine-tune the embedding only based on polarity labels. However, the reduction of $S$ is far lower than the increase in $P$. Observe the growing distance between the min and max scores for $S$. 
4 EXPERIMENTS AND RESULTS

The results are based on fine-tuning and continuous evaluation of the baseline models under different loss functions and corresponding parameters. According to the available literature, fine-tuning transformers for 1 to 3 epochs is sufficient in many cases (Gao et al., 2022). Beyond this, we observe smaller improvements, but no signs of overfitting. To decide on a suitable number of training samples (in the range [50, 100000]) for further experiments, we study the differences between models after 5 epochs. Despite the reported effectiveness of few-shot learning for sentence-transformers (Tunstall et al., 2022), we observe improvements in polarity when increasing the sample size far beyond the scope of few-shot learning. Table 4 illustrates this behavior, aggregated across all models and loss configurations. While the polarity score $P$ increases, the semantic similarity score $S$ takes a slight hit throughout training. The latter is to be expected, because we fine-tune the embedding based only on polarity labels. However, the reduction of $S$ is far lower than the increase in $P$. Observe the growing distance between the min and max scores for $S$. This distance indicates that certain model and loss configurations perform vastly better (or worse) on our joint task, and it underpins our hypothesis that both objectives can be balanced despite the apparent trade-off. This is further supported by the relatively small changes in the standard deviation. Figure 3 shows an increasing number of outliers as the embedding space shifts towards polarity; we aim to minimize these outliers through the choice of training configuration. To investigate possible configurations while accounting for computational efficiency, we continue by setting the sample size to $N = 50,000$ and perform detailed experiments with the aforementioned loss functions and their $\lambda$ margins on both datasets. Details on training and configurations are found in Appendix A.

Table 4: Aggregated scores across all configurations for different sample sizes after 5 epochs on the validation split of the SST-2 dataset.

| Samples | Polarity Score ($P$) | | | | Semantic Similarity Score ($S$) | | | |
|---------|------|------|------|------|------|------|------|------|
| | Mean | σ | Min | Max | Mean | σ | Min | Max |
| 50 | 75.7 | 7.5 | 63.0 | 81.5 | 75.1 | 16.6 | 46.6 | 85.5 |
| 500 | 75.7 | 7.5 | 63.0 | 81.5 | 75.1 | 16.6 | 46.6 | 85.5 |
| 2000 | 75.7 | 7.5 | 62.9 | 81.7 | 75.1 | 16.6 | 46.6 | 85.5 |
| 5000 | 76.3 | 7.7 | 63.1 | 83.1 | 75.1 | 16.6 | 46.5 | 85.5 |
| 10000 | 78.0 | 8.3 | 63.2 | 87.3 | 74.9 | 16.8 | 45.7 | 85.4 |
| 20000 | 81.5 | 8.7 | 61.8 | 89.2 | 73.0 | 18.3 | 36.4 | 84.9 |
| 50000 | 86.2 | 6.4 | 68.0 | 92.5 | 70.2 | 21.3 | 29.6 | 84.7 |
| 100000 | 88.9 | 4.0 | 72.2 | 93.4 | 69.3 | 22.3 | 29.0 | 84.6 |

Figure 3: Box plot of polarity and semantic similarity scores for each sample size on the SST-2 dataset.

Tables 5 and 6 show the polarity and semantic similarity scores obtained after the continued training with $N = 50,000$ samples. The tables are organized to showcase the impact of the different loss functions and their $\lambda$ margins. The best scores are shown in boldface, and the reference model, the SetFit baseline, and the best-performing model(s) are highlighted. Note that for semantic similarity we boldface the two highest scores, since MultipleNegativesRankingLoss, although seemingly performing strongly on this task, does so owing to minimal adaptation to the new training samples, with performance similar to the respective baseline models; this can be confirmed by inspecting its polarity scores.

5 DISCUSSION

Most model configurations adjusted the embeddings towards the correct polarity upon fine-tuning. However, minilm-6 falls short of retaining its semantic similarity capabilities, while the remaining models appear to learn both tasks, with only slight differences between configurations. TripletLoss stands out as the best-performing loss function, especially for smaller margins, with $\lambda \in \{0.01, 0.10\}$ strongly outperforming the default value of 5.0. The earlier referenced statement that OnlineContrastiveLoss generally performs better than ContrastiveLoss holds for most experiments.\(^5\) For the ContrastiveLoss configurations, the default $\lambda$ value of 0.5 seems well suited to the tasks, with minimal changes for different margins. MultipleNegativesRankingLoss is an outlier in both sets of results. This is likely attributable to poor example generation for this particular loss function: MultipleNegativesRankingLoss treats sentences from distinct sentence pairs as dissimilar, whereas we generate multiple similar pairs with the same first sentence, resulting in contradictory examples.
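To illustrate the contradiction, consider a hypothetical mini-batch in which two generated pairs share the same anchor (the first pair is taken from Table 3; the second "similar" sentence is an invented paraphrase for the sake of the example):

```python
from sentence_transformers import InputExample

batch = [
    InputExample(texts=["Bring new energy", "Juiced with enough energy and excitement"]),  # (a_1, p_1)
    InputExample(texts=["Bring new energy", "Full of fresh energy"]),                      # (a_1, p_2)
]

# MultipleNegativesRankingLoss assumes that, for pair i, every p_j with j != i in the
# same batch is a negative for anchor a_i. With a shared anchor this yields:
for i, ex_i in enumerate(batch):
    for j, ex_j in enumerate(batch):
        if i != j:
            print(f"implied negative: ({ex_i.texts[0]!r}, {ex_j.texts[1]!r})")

# Both implied negatives pair the anchor with a sentence that the example generation
# explicitly created as *similar* to it, i.e., a contradictory training signal.
```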
This problem does not arise for any of the other loss functions. The key takeaway is that the implicit relations between distinct training examples severely restrict our flexibility in example generation.

\(^5\)https://www.sbert.net/docs/package_reference/losses.html#onlinecontrastiveloss

Table 5: Polarity scores for all loss configurations after 5 epochs with $N = 50,000$ samples, retrieving $k = 16$ sentences.

| Loss | $\lambda$ | e5-small | | gte-base | | gte-small | | minilm-6 | |
|-----------------|-------|-----------|------|-----------|------|-----------|------|-----------|------|
| | | sarcastic | sst2 | sarcastic | sst2 | sarcastic | sst2 | sarcastic | sst2 |
| Reference | - | 71.42±2.1 | 81.52±3.7 | 67.42±0.7 | 80.42±2.6 | 66.82±0.6 | 77.82±2.2 | 63.7±0.2 | 60.21±1.9 |
| Cosine (SetFit) | 0.10 | 85.2±3.4 | 86.2±2.4 | 82.1±3.8 | 85.6±2.5 | 82.8±3.4 | 84.2±3.9 | 79.5±2.0 | 77.9±2.9 |
| Contrastive | 0.25 | 88.8±3.4 | 89.5±2.3 | 86.9±3.6 | 89.2±2.1 | 81.9±2.0 | 88.0±2.5 | 75.9±2.6 | 68.0±2.4 |
| Contrastive | 0.50 | 89.8±3.1 | 90.7±2.3 | 88.2±3.1 | 90.0±2.5 | 84.3±2.9 | 88.8±2.1 | 76.8±2.4 | 72.4±2.6 |
| Contrastive | 0.75 | 89.9±3.1 | 91.6±2.3 | 88.9±3.6 | 90.6±2.5 | 86.8±2.5 | 89.1±2.7 | 77.8±2.2 | 75.1±2.8 |
| Contrastive | 1.00 | 89.8±3.5 | 91.2±2.4 | 88.7±3.7 | 90.7±2.5 | 87.7±3.3 | 89.5±2.6 | 79.0±2.7 | 77.3±2.8 |
| MultipleNeg | - | 73.6±2.9 | 80.8±2.4 | 73.1±2.4 | 81.8±2.5 | 72.0±2.6 | 80.6±2.4 | 69.0±2.0 | 69.4±2.1 |
| OnlineContr | 0.10 | 89.6±2.7 | 90.4±2.7 | 87.4±2.8 | 89.5±2.2 | 82.6±2.0 | 88.2±2.8 | 78.9±2.0 | 78.2±2.5 |
| OnlineContr | 0.25 | 90.0±2.5 | 91.5±2.8 | 88.2±2.4 | 90.2±2.5 | 84.4±2.3 | 88.9±2.6 | 78.9±2.4 | 74.1±2.7 |
| OnlineContr | 0.50 | 89.7±2.9 | 91.6±2.4 | 88.2±2.7 | 90.6±2.6 | 86.0±2.6 | 89.2±2.7 | 79.0±2.9 | 75.2±2.7 |
| OnlineContr | 0.75 | 89.5±2.6 | 91.7±2.4 | 88.6±2.7 | 90.8±2.5 | 87.2±2.9 | 89.2±2.7 | 80.0±2.7 | 77.5±2.8 |
| OnlineContr | 1.00 | 89.6±2.6 | 91.7±2.5 | 88.3±2.7 | 90.7±2.6 | 87.5±2.7 | 89.6±2.7 | 80.5±2.8 | 78.4±2.7 |
| Triplet | 0.01 | 90.2±2.6 | 91.5±2.5 | 82.5±2.9 | 90.3±2.9 | 84.0±2.5 | 89.1±2.6 | 78.5±2.5 | 76.1±2.9 |
| Triplet | 0.10 | 90.6±2.6 | 91.9±2.6 | 89.7±2.7 | 91.2±2.6 | 88.4±2.7 | 89.2±2.7 | 83.5±2.9 | 80.8±2.6 |
| Triplet | 1.00 | 90.1±2.5 | 90.9±2.5 | 88.4±2.6 | 90.6±2.5 | 87.4±2.6 | 88.6±2.7 | 84.1±2.6 | 83.2±3.1 |
| Triplet | 5.00 | 88.2±2.5 | 89.3±2.4 | 86.5±2.6 | 90.1±2.5 | 84.9±2.6 | 88.2±2.6 | 81.5±2.7 | 81.3±3.0 |
| Triplet | 7.50 | 88.2±2.5 | 89.6±2.1 | 86.6±2.7 | 90.1±2.5 | 84.8±2.6 | 88.2±2.5 | 81.4±2.8 | 81.5±3.0 |
| Triplet | 10.00 | 88.1±2.5 | 89.6±2.2 | 86.8±2.6 | 90.2±2.9 | 84.8±2.6 | 88.1±2.6 | 81.6±2.8 | 81.2±3.0 |
| Average | - | 88.5 | 90.3 | 86.8 | 89.8 | 84.9 | 88.4 | 79.2 | 76.6 |

Table 6: Semantic similarity scores for all loss configurations after 5 epochs with $N = 50,000$ samples, retrieving $k = 16$ sentences.
| Loss | $\lambda$ | e5-small | | gte-base | | gte-small | | minilm-6 | |
|-----------------|-------|-----------|------|-----------|------|-----------|------|-----------|------|
| | | sarcastic | sst2 | sarcastic | sst2 | sarcastic | sst2 | sarcastic | sst2 |
| Reference | - | 83.4±1.5 | 85.5±1.7 | 81.4±1.6 | 83.7±1.4 | 82.5±1.6 | 84.8±1.4 | 42.3±6.6 | 46.6±7.4 |
| Cosine (SetFit) | 0.10 | 78.5±1.2 | 81.6±1.1 | 75.6±0.9 | 79.9±1.8 | 75.6±1.5 | 80.7±1.8 | 17.8±5.6 | 27.1±6.9 |
| Contrastive | 0.25 | 79.4±2.0 | 83.3±2.0 | 75.0±1.4 | 81.0±1.8 | 78.7±1.1 | 82.1±1.8 | 25.6±6.6 | 34.8±7.2 |
| Contrastive | 0.50 | 79.7±2.0 | 83.7±1.9 | 76.2±1.4 | 81.4±1.8 | 79.0±1.1 | 82.6±1.7 | 26.6±6.6 | 34.5±6.8 |
| Contrastive | 0.75 | 79.8±2.0 | 83.8±1.9 | 76.5±1.6 | 81.5±1.7 | 79.1±1.2 | 82.8±1.6 | 27.1±6.6 | 34.2±6.7 |
| Contrastive | 1.00 | 79.8±2.0 | 83.7±1.9 | 76.5±1.7 | 81.3±1.7 | 78.1±1.2 | 82.4±1.6 | 27.8±5.5 | 33.9±6.6 |
| MultipleNeg | - | 82.5±1.6 | 84.7±1.8 | 80.4±1.8 | 82.5±1.6 | 81.6±1.8 | 83.9±1.6 | 39.9±1.6 | 35.7±1.8 |
| OnlineContr | 0.10 | 80.1±1.9 | 83.8±1.9 | 75.6±1.3 | 81.2±1.8 | 79.2±1.0 | 82.5±1.7 | 25.4±6.7 | 33.2±7.1 |
| OnlineContr | 0.25 | 80.5±1.9 | 84.1±1.9 | 77.1±1.3 | 81.7±1.8 | 79.7±1.0 | 82.9±1.7 | 27.1±6.6 | 33.0±6.9 |
| OnlineContr | 0.50 | 80.6±1.9 | 84.1±1.9 | 77.8±1.3 | 82.0±1.6 | 79.9±1.0 | 83.0±1.6 | 28.3±6.5 | 33.9±7.0 |
| OnlineContr | 0.75 | 80.6±1.9 | 84.0±1.9 | 77.5±1.3 | 81.9±1.6 | 79.4±1.2 | 82.9±1.6 | 28.5±6.5 | 34.5±7.0 |
| OnlineContr | 1.00 | 80.6±1.9 | 84.0±1.9 | 77.4±1.6 | 81.7±1.6 | 78.9±1.3 | 82.7±1.6 | 29.2±6.4 | 35.0±7.0 |
| Triplet | 0.01 | 81.2±1.8 | 83.8±2.0 | 78.0±1.4 | 81.9±1.7 | 79.9±1.0 | 83.0±1.7 | 25.8±6.2 | 33.9±7.3 |
| Triplet | 0.10 | 81.3±1.7 | 83.7±1.9 | 78.1±1.3 | 81.9±1.7 | 79.9±1.2 | 83.0±1.6 | 30.5±6.1 | 35.2±7.3 |
| Triplet | 1.00 | 79.2±1.1 | 82.8±1.1 | 76.3±0.9 | 80.3±1.8 | 77.2±1.6 | 81.3±1.6 | 23.7±6.0 | 30.5±7.0 |
| Triplet | 5.00 | 78.3±1.1 | 81.8±1.1 | 74.6±0.7 | 79.9±1.8 | 75.8±1.7 | 80.6±1.7 | 20.0±5.9 | 29.6±7.0 |
| Triplet | 7.50 | 78.4±1.1 | 81.8±1.1 | 74.7±0.7 | 79.9±1.8 | 75.7±1.7 | 80.6±1.7 | 20.4±5.9 | 29.5±7.0 |
| Triplet | 10.00 | 78.3±1.1 | 81.8±1.1 | 74.7±0.7 | 80.0±1.8 | 75.7±1.7 | 80.7±1.7 | 20.5±5.9 | 29.6±7.0 |
| Average | - | 80.0 | 83.5 | 76.7 | 81.3 | 78.6 | 82.3 | 26.7 | 33.7 |

MultipleNegativesRankingLoss may therefore be unsuitable for fine-tuning toward other objectives, as we have less control over the targeted separations between specific sentences. The other loss functions have separate example generation implementations and offer control over the $\lambda$ parameter that defines the margin between similar and dissimilar sentences. Interestingly, independent of the loss function, the margin value does not necessarily correlate with good model performance. For distinguishing polarity, higher $\lambda$ values resulted in only slightly improved scores for ContrastiveLoss. For TripletLoss, the opposite is true, contradicting the intuition that embeddings of dissimilar sentences should be pushed apart by a larger rather than a smaller margin. The differences between the embeddings may thus be too subtle for larger margins to be attainable in specific configurations (see the short numerical illustration below). As for the models, e5-small scores highest in nearly all configurations and is effective at maximizing both polarity and semantic similarity, as is evident from the average row of each table.
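As a back-of-the-envelope check of the margin discussion, note that if embeddings are unit-normalized (an assumption that holds for some, but not necessarily all, of the evaluated checkpoints) and the default Euclidean distance is used, the distance between any two embeddings is at most 2, so \(d(A,P) - d(A,N)\) lies in \([-2, 2]\). With \(\lambda \geq 2\), the hinge therefore never reaches zero and every triplet keeps contributing, whereas a small margin such as 0.1 lets easy triplets drop out and focuses updates on hard ones:

```python
import torch

def triplet_loss(A, P, N, margin):
    # Hinge over Euclidean distances, as in the formula of Section 3.3.
    return torch.clamp((A - P).norm() - (A - N).norm() + margin, min=0.0)

# Random unit-normalized vectors stand in for anchor, positive, and negative embeddings.
A, P, N = (torch.nn.functional.normalize(torch.randn(1, 384), dim=-1) for _ in range(3))

for margin in (0.1, 5.0):
    print(f"margin={margin}: triplet loss {triplet_loss(A, P, N, margin).item():.3f}")
```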
For further details on model performance in the final experiment with 50,000 samples, see Appendix B for the average score across all loss functions per model and Appendix C for details on each loss function, separated by model and dataset.

A final evaluation on the well-established SentEval toolkit (Conneau & Kiela, 2018) allows us to compare our models on a series of transfer tasks, using the two best-performing baseline models (gte-base and e5-small). Table 7 shows the results of TripletLoss with a margin of 0.1 against a similar training procedure with the SetFit model, both trained with 50,000 samples and sorted by average score; a minimal evaluation sketch is included after Table 7. We reuse the suitable evaluation tasks from Reimers & Gurevych (2019), who fine-tune on NLI data. Note how the fine-tuning approach achieves better overall scores, especially for MR (movie reviews) and SST-2. Our models also transfer well to tasks such as SUBJ (subjective/objective classification).

Comparing models trained with different loss functions is challenging due to the differing data formats, as we cannot guarantee a direct comparison when the inputs are unequal. Unlike typical research on loss functions, we did not consider the loss values obtained during training or evaluation, as we find them uninformative in this context, i.e., when balancing two possibly opposing objectives. However, we argue that our suggested metrics in Section 3.1 are reasonable and intuitive, and can likely be used for further studies on sentence embeddings.

Table 7: SentEval performance for the best configuration and SetFit.

| Model | Base model | Dataset | MR | CR | SUBJ | MPQA | SST2 | TREC | avg |
|----------------------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|
| Triplet $\lambda$=0.1 | gte-base | sst2 | **89.31** | **89.27** | **92.91** | 85.95 | **93.19** | **80.80** | **85.50** |
| Triplet $\lambda$=0.1 | gte-base | sarcastic | 84.33 | 88.82 | 92.82 | **88.04** | 90.83 | 88.40 | 85.01 |
| Triplet $\lambda$=0.1 | e5-small | sst2 | 88.95 | 88.98 | 91.06 | 86.28 | 93.41 | 79.80 | 84.97 |
| SetFit | gte-base | sst2 | 84.30 | 88.85 | 90.91 | 86.08 | 89.18 | 86.00 | 84.27 |
| SetFit | e5-small | sst2 | 85.43 | 85.16 | 86.58 | 83.93 | 91.05 | 88.00 | 82.18 |
| SetFit | gte-base | sarcastic | 81.61 | 86.52 | 90.01 | 87.50 | 88.69 | 86.00 | 81.92 |
| SetFit | e5-small | sarcastic | 82.69 | 83.97 | 90.65 | 86.80 | 88.80 | **90.20** | 81.62 |
| Triplet $\lambda$=0.1 | e5-small | sarcastic | 82.40 | 76.27 | 90.47 | 85.75 | 89.95 | 71.40 | 78.81 |
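The SentEval evaluation referred to above can be reproduced roughly as follows. This is a sketch under the assumption that the SentEval toolkit is installed from its repository with its data downloaded; the model path, data path, and classifier settings are placeholders rather than our exact setup.

```python
import numpy as np
import senteval
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("path/to/finetuned-e5-small-triplet-0.1")  # placeholder path

def prepare(params, samples):
    return  # no corpus-level preprocessing needed for a sentence-transformers encoder

def batcher(params, batch):
    # SentEval passes batches of tokenized sentences; join tokens and embed them.
    sentences = [" ".join(tokens) if tokens else "." for tokens in batch]
    return np.asarray(model.encode(sentences))

params = {
    "task_path": "SentEval/data",  # placeholder path to the downloaded SentEval data
    "usepytorch": True,
    "kfold": 10,
    "classifier": {"nhid": 0, "optim": "adam", "batch_size": 64,
                   "tenacity": 5, "epoch_size": 4},
}
se = senteval.engine.SE(params, batcher, prepare)
results = se.eval(["MR", "CR", "SUBJ", "MPQA", "SST2", "TREC"])
print({task: res.get("acc") for task, res in results.items()})
```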
6 CONCLUSION AND FUTURE WORK

This paper has explored the potential of encoding polarity into sentence embeddings while retaining semantic similarity, achieved by fine-tuning models on data generated to suit the objectives of various sentence-transformers loss functions. We introduced two metrics to evaluate our results: the Polarity Score and the Semantic Similarity Score. We conducted two main experiments. First, we investigated the importance of the sample size in our modeling scheme, finding that larger samples from the generated data contribute positively to both metrics. In the second experiment, we used a suitable sample size and compared all model and loss function configurations. We found that 1) the e5-small-v2 model outperformed the other baseline models tested (gte-base, gte-small and all-MiniLM-L6-v2), and 2) TripletLoss, especially with lower $\lambda$ margins, had the overall best results. We conclude that fine-tuning the e5-small model with TripletLoss, using the presented example generation and a margin parameter of $\lambda = 0.1$, is likely to yield an efficient and high-performing model for polarity-aware semantic retrieval, here evaluated on binary sentiment and sarcastic news headlines.

Future work consists of several paths for improvement: 1) With the suggested model configuration, the same fine-tuning approach can be applied to a broader range of tasks beyond sarcasm and sentiment data. 2) The example generation process can be extended to support multiclass inputs, e.g., via one-vs-rest schemes and other methods for handling multiple classes in a system designed for contrasting two samples. 3) Although our proposed metrics are a first step towards assessing multiple objectives in this novel context, how to combine them to better represent the drift from the original semantic similarity remains an open question.

REPRODUCIBILITY STATEMENT

All code is available in an anonymous repository on the Anonymous GitHub page[^1]. Results and the corresponding tables and figures are generated programmatically for efficient reproduction. Sampling operations are fully deterministic, with a defined random state. Source datasets are provided as used after initial preprocessing, and the experiments are logically structured in the source code. Some results are compiled from the resulting logs using wandb (both from the API and local run files), which cannot be included because of personal identifiers. However, code is provided to handle the resulting log files after training to ensure reproducibility. The necessary parsed and anonymized data to reproduce tables and figures is included.

ETHICS STATEMENT

We have reviewed the ICLR Code of Ethics and confirm that our work aligns with its guidelines. The datasets and pre-trained sentence-transformer models utilized in our experiments are public and readily available. The final system can be used for automatic retrieval, which may raise ethical concerns, especially in public-facing applications. One must therefore consider privacy, bias, fairness, and potential misuse of the results.

REFERENCES

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pp. 32–43, Atlanta, Georgia, USA, June 2013. Association for Computational Linguistics. URL https://aclanthology.org/S13-1004

Mostafa M Amin, Rui Mao, Erik Cambria, and Björn W Schuller. A wide evaluation of chatgpt on affective computing tasks. arXiv preprint arXiv:2308.13911, 2023.

Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pp. 1–14, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/S17-2001. URL https://aclanthology.org/S17-2001

Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. Recall and learn: Fine-tuning deep pretrained language models with less forgetting, 2020.

Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljacic, Shang-Wen Li, Wen-tau Yih, Yoon Kim, and James Glass.
DiffCSE: Difference-based contrastive learning for sentence embeddings. In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2022. Alexis Conneau and Douwe Kiela. Senteval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449, 2018. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 670–680, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1070. URL https://aclanthology.org/D17-1070 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. [^1]: https://anonymous.4open.science/r/polarity-aware-similarity